00:00:00.000 Started by upstream project "autotest-nightly" build number 3708 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3089 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.154 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.155 The recommended git tool is: git 00:00:00.155 using credential 00000000-0000-0000-0000-000000000002 00:00:00.157 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.189 Fetching changes from the remote Git repository 00:00:00.191 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.227 Using shallow fetch with depth 1 00:00:00.227 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.227 > git --version # timeout=10 00:00:00.252 > git --version # 'git version 2.39.2' 00:00:00.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.265 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.265 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.190 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.200 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.211 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:07.211 > git config core.sparsecheckout # timeout=10 00:00:07.220 > git read-tree -mu HEAD # timeout=10 00:00:07.237 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:07.257 Commit message: "inventory/dev: add missing long names" 00:00:07.257 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:07.394 [Pipeline] Start of Pipeline 00:00:07.409 [Pipeline] library 00:00:07.410 Loading library shm_lib@master 00:00:07.410 Library shm_lib@master is cached. Copying from home. 00:00:07.424 [Pipeline] node 00:00:07.432 Running on VM-host-SM17 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.435 [Pipeline] { 00:00:07.445 [Pipeline] catchError 00:00:07.446 [Pipeline] { 00:00:07.460 [Pipeline] wrap 00:00:07.471 [Pipeline] { 00:00:07.479 [Pipeline] stage 00:00:07.481 [Pipeline] { (Prologue) 00:00:07.507 [Pipeline] echo 00:00:07.508 Node: VM-host-SM17 00:00:07.516 [Pipeline] cleanWs 00:00:07.524 [WS-CLEANUP] Deleting project workspace... 00:00:07.524 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.529 [WS-CLEANUP] done 00:00:07.709 [Pipeline] setCustomBuildProperty 00:00:07.769 [Pipeline] nodesByLabel 00:00:07.771 Found a total of 1 nodes with the 'sorcerer' label 00:00:07.781 [Pipeline] httpRequest 00:00:07.784 HttpMethod: GET 00:00:07.785 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:07.785 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:07.787 Response Code: HTTP/1.1 200 OK 00:00:07.787 Success: Status code 200 is in the accepted range: 200,404 00:00:07.788 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:08.739 [Pipeline] sh 00:00:09.021 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:09.042 [Pipeline] httpRequest 00:00:09.047 HttpMethod: GET 00:00:09.048 URL: http://10.211.164.101/packages/spdk_40b11d96241a5b40eeb065071584c4ff1a645b70.tar.gz 00:00:09.048 Sending request to url: http://10.211.164.101/packages/spdk_40b11d96241a5b40eeb065071584c4ff1a645b70.tar.gz 00:00:09.069 Response Code: HTTP/1.1 200 OK 00:00:09.069 Success: Status code 200 is in the accepted range: 200,404 00:00:09.070 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_40b11d96241a5b40eeb065071584c4ff1a645b70.tar.gz 00:01:25.856 [Pipeline] sh 00:01:26.136 + tar --no-same-owner -xf spdk_40b11d96241a5b40eeb065071584c4ff1a645b70.tar.gz 00:01:29.445 [Pipeline] sh 00:01:29.723 + git -C spdk log --oneline -n5 00:01:29.723 40b11d962 lib/vhost: define timeout values when stopping a session 00:01:29.723 db19aa5bc Revert "dpdk/crypto: increase RTE_CRYPTO_MAX_DEVS to fit QAT SYM ..." 00:01:29.723 253cca4fc nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:01:29.723 c3870302f scripts/pkgdep: Fix install_shfmt() under FreeBSD 00:01:29.723 b65c4a87a scripts/pkgdep: Remove UADK from install_all_dependencies() 00:01:29.740 [Pipeline] writeFile 00:01:29.755 [Pipeline] sh 00:01:30.034 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:30.045 [Pipeline] sh 00:01:30.320 + cat autorun-spdk.conf 00:01:30.320 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.320 SPDK_TEST_NVME=1 00:01:30.320 SPDK_TEST_FTL=1 00:01:30.320 SPDK_TEST_ISAL=1 00:01:30.320 SPDK_RUN_ASAN=1 00:01:30.320 SPDK_RUN_UBSAN=1 00:01:30.320 SPDK_TEST_XNVME=1 00:01:30.320 SPDK_TEST_NVME_FDP=1 00:01:30.320 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.329 RUN_NIGHTLY=1 00:01:30.331 [Pipeline] } 00:01:30.344 [Pipeline] // stage 00:01:30.359 [Pipeline] stage 00:01:30.361 [Pipeline] { (Run VM) 00:01:30.376 [Pipeline] sh 00:01:30.653 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:30.654 + echo 'Start stage prepare_nvme.sh' 00:01:30.654 Start stage prepare_nvme.sh 00:01:30.654 + [[ -n 4 ]] 00:01:30.654 + disk_prefix=ex4 00:01:30.654 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:30.654 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:30.654 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:30.654 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.654 ++ SPDK_TEST_NVME=1 00:01:30.654 ++ SPDK_TEST_FTL=1 00:01:30.654 ++ SPDK_TEST_ISAL=1 00:01:30.654 ++ SPDK_RUN_ASAN=1 00:01:30.654 ++ SPDK_RUN_UBSAN=1 00:01:30.654 ++ SPDK_TEST_XNVME=1 00:01:30.654 ++ SPDK_TEST_NVME_FDP=1 00:01:30.654 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.654 ++ RUN_NIGHTLY=1 00:01:30.654 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:30.654 + nvme_files=() 
00:01:30.654 + declare -A nvme_files 00:01:30.654 + backend_dir=/var/lib/libvirt/images/backends 00:01:30.654 + nvme_files['nvme.img']=5G 00:01:30.654 + nvme_files['nvme-cmb.img']=5G 00:01:30.654 + nvme_files['nvme-multi0.img']=4G 00:01:30.654 + nvme_files['nvme-multi1.img']=4G 00:01:30.654 + nvme_files['nvme-multi2.img']=4G 00:01:30.654 + nvme_files['nvme-openstack.img']=8G 00:01:30.654 + nvme_files['nvme-zns.img']=5G 00:01:30.654 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:30.654 + (( SPDK_TEST_FTL == 1 )) 00:01:30.654 + nvme_files["nvme-ftl.img"]=6G 00:01:30.654 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:30.654 + nvme_files["nvme-fdp.img"]=1G 00:01:30.654 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:30.654 + for nvme in "${!nvme_files[@]}" 00:01:30.654 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:30.654 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.654 + for nvme in "${!nvme_files[@]}" 00:01:30.654 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:01:30.654 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:30.654 + for nvme in "${!nvme_files[@]}" 00:01:30.654 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:31.220 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.220 + for nvme in "${!nvme_files[@]}" 00:01:31.220 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:31.220 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:31.220 + for nvme in "${!nvme_files[@]}" 00:01:31.220 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:31.220 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.220 + for nvme in "${!nvme_files[@]}" 00:01:31.220 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:31.220 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.220 + for nvme in "${!nvme_files[@]}" 00:01:31.220 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:31.220 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.479 + for nvme in "${!nvme_files[@]}" 00:01:31.479 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:01:31.479 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:31.479 + for nvme in "${!nvme_files[@]}" 00:01:31.479 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:32.046 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.046 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:32.304 + echo 'End stage prepare_nvme.sh' 00:01:32.304 End stage prepare_nvme.sh 00:01:32.316 [Pipeline] sh 00:01:32.594 + DISTRO=fedora38 
CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:32.595 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:32.595 00:01:32.595 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:32.595 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:32.595 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:32.595 HELP=0 00:01:32.595 DRY_RUN=0 00:01:32.595 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:01:32.595 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:32.595 NVME_AUTO_CREATE=0 00:01:32.595 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:01:32.595 NVME_CMB=,,,, 00:01:32.595 NVME_PMR=,,,, 00:01:32.595 NVME_ZNS=,,,, 00:01:32.595 NVME_MS=true,,,, 00:01:32.595 NVME_FDP=,,,on, 00:01:32.595 SPDK_VAGRANT_DISTRO=fedora38 00:01:32.595 SPDK_VAGRANT_VMCPU=10 00:01:32.595 SPDK_VAGRANT_VMRAM=12288 00:01:32.595 SPDK_VAGRANT_PROVIDER=libvirt 00:01:32.595 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:32.595 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:32.595 SPDK_OPENSTACK_NETWORK=0 00:01:32.595 VAGRANT_PACKAGE_BOX=0 00:01:32.595 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:32.595 FORCE_DISTRO=true 00:01:32.595 VAGRANT_BOX_VERSION= 00:01:32.595 EXTRA_VAGRANTFILES= 00:01:32.595 NIC_MODEL=e1000 00:01:32.595 00:01:32.595 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:01:32.595 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:35.879 Bringing machine 'default' up with 'libvirt' provider... 00:01:36.446 ==> default: Creating image (snapshot of base box volume). 00:01:36.446 ==> default: Creating domain with the following settings... 
00:01:36.446 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1715795608_24b2e55d363d4ef0bd23 00:01:36.446 ==> default: -- Domain type: kvm 00:01:36.446 ==> default: -- Cpus: 10 00:01:36.446 ==> default: -- Feature: acpi 00:01:36.446 ==> default: -- Feature: apic 00:01:36.446 ==> default: -- Feature: pae 00:01:36.446 ==> default: -- Memory: 12288M 00:01:36.446 ==> default: -- Memory Backing: hugepages: 00:01:36.446 ==> default: -- Management MAC: 00:01:36.446 ==> default: -- Loader: 00:01:36.446 ==> default: -- Nvram: 00:01:36.446 ==> default: -- Base box: spdk/fedora38 00:01:36.446 ==> default: -- Storage pool: default 00:01:36.446 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1715795608_24b2e55d363d4ef0bd23.img (20G) 00:01:36.446 ==> default: -- Volume Cache: default 00:01:36.446 ==> default: -- Kernel: 00:01:36.446 ==> default: -- Initrd: 00:01:36.446 ==> default: -- Graphics Type: vnc 00:01:36.446 ==> default: -- Graphics Port: -1 00:01:36.446 ==> default: -- Graphics IP: 127.0.0.1 00:01:36.446 ==> default: -- Graphics Password: Not defined 00:01:36.446 ==> default: -- Video Type: cirrus 00:01:36.446 ==> default: -- Video VRAM: 9216 00:01:36.446 ==> default: -- Sound Type: 00:01:36.446 ==> default: -- Keymap: en-us 00:01:36.446 ==> default: -- TPM Path: 00:01:36.446 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:36.446 ==> default: -- Command line args: 00:01:36.446 ==> default: -> value=-device, 00:01:36.446 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:36.446 ==> default: -> value=-drive, 00:01:36.446 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:36.446 ==> default: -> value=-device, 00:01:36.446 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:36.446 ==> default: -> value=-device, 00:01:36.446 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:36.446 ==> default: -> value=-drive, 00:01:36.446 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:01:36.447 ==> default: -> value=-device, 00:01:36.447 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.447 ==> default: -> value=-device, 00:01:36.447 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:36.447 ==> default: -> value=-drive, 00:01:36.447 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:36.447 ==> default: -> value=-device, 00:01:36.447 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.447 ==> default: -> value=-drive, 00:01:36.447 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:36.447 ==> default: -> value=-device, 00:01:36.447 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.447 ==> default: -> value=-drive, 00:01:36.447 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:36.447 ==> default: -> value=-device, 00:01:36.447 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.447 ==> default: -> value=-device, 00:01:36.447 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:36.447 ==> default: -> value=-device, 00:01:36.447 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:36.447 ==> default: -> value=-drive, 00:01:36.447 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:36.447 ==> default: -> value=-device, 00:01:36.447 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.705 ==> default: Creating shared folders metadata... 00:01:36.705 ==> default: Starting domain. 00:01:38.080 ==> default: Waiting for domain to get an IP address... 00:02:00.010 ==> default: Waiting for SSH to become available... 00:02:00.010 ==> default: Configuring and enabling network interfaces... 00:02:01.937 default: SSH address: 192.168.121.203:22 00:02:01.937 default: SSH username: vagrant 00:02:01.937 default: SSH auth method: private key 00:02:04.503 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:12.682 ==> default: Mounting SSHFS shared folder... 00:02:13.618 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:13.619 ==> default: Checking Mount.. 00:02:14.555 ==> default: Folder Successfully Mounted! 00:02:14.555 ==> default: Running provisioner: file... 00:02:15.491 default: ~/.gitconfig => .gitconfig 00:02:16.059 00:02:16.059 SUCCESS! 00:02:16.059 00:02:16.059 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:16.059 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:16.059 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:16.059 00:02:16.068 [Pipeline] } 00:02:16.086 [Pipeline] // stage 00:02:16.095 [Pipeline] dir 00:02:16.096 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:02:16.098 [Pipeline] { 00:02:16.113 [Pipeline] catchError 00:02:16.115 [Pipeline] { 00:02:16.130 [Pipeline] sh 00:02:16.409 + vagrant ssh-config --host vagrant 00:02:16.417 + sed -ne /^Host/,$p 00:02:16.417 + tee ssh_conf 00:02:19.744 Host vagrant 00:02:19.744 HostName 192.168.121.203 00:02:19.744 User vagrant 00:02:19.744 Port 22 00:02:19.744 UserKnownHostsFile /dev/null 00:02:19.744 StrictHostKeyChecking no 00:02:19.744 PasswordAuthentication no 00:02:19.744 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:02:19.744 IdentitiesOnly yes 00:02:19.744 LogLevel FATAL 00:02:19.744 ForwardAgent yes 00:02:19.744 ForwardX11 yes 00:02:19.744 00:02:19.758 [Pipeline] withEnv 00:02:19.761 [Pipeline] { 00:02:19.777 [Pipeline] sh 00:02:20.053 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:20.053 source /etc/os-release 00:02:20.053 [[ -e /image.version ]] && img=$(< /image.version) 00:02:20.053 # Minimal, systemd-like check. 
00:02:20.053 if [[ -e /.dockerenv ]]; then 00:02:20.053 # Clear garbage from the node's name: 00:02:20.053 # agt-er_autotest_547-896 -> autotest_547-896 00:02:20.053 # $HOSTNAME is the actual container id 00:02:20.053 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:20.053 if mountpoint -q /etc/hostname; then 00:02:20.053 # We can assume this is a mount from a host where container is running, 00:02:20.053 # so fetch its hostname to easily identify the target swarm worker. 00:02:20.053 container="$(< /etc/hostname) ($agent)" 00:02:20.053 else 00:02:20.053 # Fallback 00:02:20.053 container=$agent 00:02:20.053 fi 00:02:20.053 fi 00:02:20.053 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:20.053 00:02:20.065 [Pipeline] } 00:02:20.087 [Pipeline] // withEnv 00:02:20.096 [Pipeline] setCustomBuildProperty 00:02:20.110 [Pipeline] stage 00:02:20.113 [Pipeline] { (Tests) 00:02:20.131 [Pipeline] sh 00:02:20.509 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:20.780 [Pipeline] timeout 00:02:20.780 Timeout set to expire in 40 min 00:02:20.782 [Pipeline] { 00:02:20.797 [Pipeline] sh 00:02:21.075 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:21.643 HEAD is now at 40b11d962 lib/vhost: define timeout values when stopping a session 00:02:21.652 [Pipeline] sh 00:02:21.925 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:22.194 [Pipeline] sh 00:02:22.468 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:22.739 [Pipeline] sh 00:02:23.016 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:02:23.275 ++ readlink -f spdk_repo 00:02:23.275 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:23.275 + [[ -n /home/vagrant/spdk_repo ]] 00:02:23.275 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:23.275 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:23.275 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:23.275 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:23.275 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:23.275 + cd /home/vagrant/spdk_repo 00:02:23.275 + source /etc/os-release 00:02:23.275 ++ NAME='Fedora Linux' 00:02:23.275 ++ VERSION='38 (Cloud Edition)' 00:02:23.275 ++ ID=fedora 00:02:23.275 ++ VERSION_ID=38 00:02:23.275 ++ VERSION_CODENAME= 00:02:23.275 ++ PLATFORM_ID=platform:f38 00:02:23.275 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:23.275 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:23.275 ++ LOGO=fedora-logo-icon 00:02:23.275 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:23.275 ++ HOME_URL=https://fedoraproject.org/ 00:02:23.275 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:23.275 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:23.275 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:23.275 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:23.275 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:23.275 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:23.275 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:23.275 ++ SUPPORT_END=2024-05-14 00:02:23.275 ++ VARIANT='Cloud Edition' 00:02:23.275 ++ VARIANT_ID=cloud 00:02:23.275 + uname -a 00:02:23.275 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:23.275 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:23.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:23.792 Hugepages 00:02:23.792 node hugesize free / total 00:02:23.792 node0 1048576kB 0 / 0 00:02:23.793 node0 2048kB 0 / 0 00:02:23.793 00:02:23.793 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:23.793 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:23.793 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:23.793 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:24.052 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:24.052 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:24.052 + rm -f /tmp/spdk-ld-path 00:02:24.052 + source autorun-spdk.conf 00:02:24.052 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:24.052 ++ SPDK_TEST_NVME=1 00:02:24.052 ++ SPDK_TEST_FTL=1 00:02:24.052 ++ SPDK_TEST_ISAL=1 00:02:24.052 ++ SPDK_RUN_ASAN=1 00:02:24.052 ++ SPDK_RUN_UBSAN=1 00:02:24.052 ++ SPDK_TEST_XNVME=1 00:02:24.052 ++ SPDK_TEST_NVME_FDP=1 00:02:24.052 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:24.052 ++ RUN_NIGHTLY=1 00:02:24.052 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:24.052 + [[ -n '' ]] 00:02:24.052 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:24.052 + for M in /var/spdk/build-*-manifest.txt 00:02:24.052 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:24.052 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:24.052 + for M in /var/spdk/build-*-manifest.txt 00:02:24.052 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:24.052 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:24.052 ++ uname 00:02:24.052 + [[ Linux == \L\i\n\u\x ]] 00:02:24.052 + sudo dmesg -T 00:02:24.052 + sudo dmesg --clear 00:02:24.052 + dmesg_pid=5131 00:02:24.052 + [[ Fedora Linux == FreeBSD ]] 00:02:24.052 + sudo dmesg -Tw 00:02:24.052 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:24.052 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:24.052 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:24.052 + [[ 
-x /usr/src/fio-static/fio ]] 00:02:24.052 + export FIO_BIN=/usr/src/fio-static/fio 00:02:24.052 + FIO_BIN=/usr/src/fio-static/fio 00:02:24.052 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:24.052 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:24.052 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:24.052 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:24.052 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:24.052 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:24.052 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:24.052 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:24.052 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:24.052 Test configuration: 00:02:24.052 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:24.052 SPDK_TEST_NVME=1 00:02:24.052 SPDK_TEST_FTL=1 00:02:24.052 SPDK_TEST_ISAL=1 00:02:24.052 SPDK_RUN_ASAN=1 00:02:24.052 SPDK_RUN_UBSAN=1 00:02:24.052 SPDK_TEST_XNVME=1 00:02:24.052 SPDK_TEST_NVME_FDP=1 00:02:24.052 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:24.052 RUN_NIGHTLY=1 17:54:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:24.052 17:54:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:24.052 17:54:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:24.052 17:54:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:24.052 17:54:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.052 17:54:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.052 17:54:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.052 17:54:16 -- paths/export.sh@5 -- $ export PATH 00:02:24.052 17:54:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.052 17:54:16 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:24.052 17:54:16 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:24.052 17:54:16 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715795656.XXXXXX 00:02:24.311 17:54:16 -- 
common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715795656.yMqyVL 00:02:24.311 17:54:16 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:24.311 17:54:16 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:02:24.311 17:54:16 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:24.311 17:54:16 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:24.311 17:54:16 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:24.311 17:54:16 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:24.311 17:54:16 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:02:24.311 17:54:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.311 17:54:16 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:24.311 17:54:16 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:24.311 17:54:16 -- pm/common@17 -- $ local monitor 00:02:24.311 17:54:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.311 17:54:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.311 17:54:16 -- pm/common@21 -- $ date +%s 00:02:24.311 17:54:16 -- pm/common@25 -- $ sleep 1 00:02:24.311 17:54:16 -- pm/common@21 -- $ date +%s 00:02:24.311 17:54:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715795656 00:02:24.311 17:54:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715795656 00:02:24.311 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715795656_collect-vmstat.pm.log 00:02:24.311 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715795656_collect-cpu-load.pm.log 00:02:25.247 17:54:17 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:25.247 17:54:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:25.247 17:54:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:25.247 17:54:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:25.247 17:54:17 -- spdk/autobuild.sh@16 -- $ date -u 00:02:25.247 Wed May 15 05:54:17 PM UTC 2024 00:02:25.247 17:54:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:25.247 v24.05-pre-664-g40b11d962 00:02:25.247 17:54:17 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:25.247 17:54:17 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:25.247 17:54:17 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:25.247 17:54:17 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:25.247 17:54:17 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.247 ************************************ 00:02:25.247 START TEST asan 00:02:25.247 ************************************ 00:02:25.247 using asan 00:02:25.247 17:54:17 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:02:25.247 00:02:25.247 real 0m0.000s 00:02:25.247 user 0m0.000s 
00:02:25.247 sys 0m0.000s 00:02:25.247 17:54:17 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:25.247 17:54:17 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:25.247 ************************************ 00:02:25.247 END TEST asan 00:02:25.247 ************************************ 00:02:25.247 17:54:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:25.247 17:54:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:25.247 17:54:17 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:25.247 17:54:17 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:25.247 17:54:17 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.247 ************************************ 00:02:25.247 START TEST ubsan 00:02:25.247 ************************************ 00:02:25.247 using ubsan 00:02:25.247 17:54:17 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:25.247 00:02:25.247 real 0m0.000s 00:02:25.247 user 0m0.000s 00:02:25.247 sys 0m0.000s 00:02:25.247 17:54:17 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:25.247 ************************************ 00:02:25.247 END TEST ubsan 00:02:25.247 17:54:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:25.247 ************************************ 00:02:25.247 17:54:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:25.247 17:54:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:25.247 17:54:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:25.247 17:54:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:25.247 17:54:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:25.247 17:54:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:25.247 17:54:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:25.247 17:54:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:25.247 17:54:17 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:25.506 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:25.506 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:25.764 Using 'verbs' RDMA provider 00:02:41.688 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:53.916 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:53.916 Creating mk/config.mk...done. 00:02:53.916 Creating mk/cc.flags.mk...done. 00:02:53.916 Type 'make' to build. 
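The ./configure invocation above carries this job's whole feature matrix (ASAN/UBSAN, coverage, RDMA, idxd, the fio plugin, ublk, xnvme, shared libraries). To rerun the same configure-and-build step outside the CI harness, a minimal sketch, assuming a fresh checkout of https://github.com/spdk/spdk and fio sources under /usr/src/fio as in the log (the local paths are illustrative, not taken from this job):
git clone --recurse-submodules https://github.com/spdk/spdk
cd spdk
sudo ./scripts/pkgdep.sh   # install this distro's build dependencies
# Same flags autobuild passed to ./configure in the log above:
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk \
    --with-xnvme --with-shared
make -j10                  # the CI runs the equivalent "run_test make make -j10" next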
00:02:53.916 17:54:45 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:53.916 17:54:45 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:53.916 17:54:45 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:53.916 17:54:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.916 ************************************ 00:02:53.916 START TEST make 00:02:53.916 ************************************ 00:02:53.916 17:54:45 make -- common/autotest_common.sh@1121 -- $ make -j10 00:02:53.916 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:53.916 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:53.916 meson setup builddir \ 00:02:53.916 -Dwith-libaio=enabled \ 00:02:53.916 -Dwith-liburing=enabled \ 00:02:53.916 -Dwith-libvfn=disabled \ 00:02:53.916 -Dwith-spdk=false && \ 00:02:53.916 meson compile -C builddir && \ 00:02:53.916 cd -) 00:02:53.916 make[1]: Nothing to be done for 'all'. 00:02:55.819 The Meson build system 00:02:55.819 Version: 1.3.1 00:02:55.819 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:55.819 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:55.819 Build type: native build 00:02:55.819 Project name: xnvme 00:02:55.819 Project version: 0.7.3 00:02:55.819 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:55.819 C linker for the host machine: cc ld.bfd 2.39-16 00:02:55.819 Host machine cpu family: x86_64 00:02:55.819 Host machine cpu: x86_64 00:02:55.819 Message: host_machine.system: linux 00:02:55.819 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:55.819 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:55.819 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:55.819 Run-time dependency threads found: YES 00:02:55.819 Has header "setupapi.h" : NO 00:02:55.819 Has header "linux/blkzoned.h" : YES 00:02:55.819 Has header "linux/blkzoned.h" : YES (cached) 00:02:55.819 Has header "libaio.h" : YES 00:02:55.819 Library aio found: YES 00:02:55.819 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:55.819 Run-time dependency liburing found: YES 2.2 00:02:55.819 Dependency libvfn skipped: feature with-libvfn disabled 00:02:55.819 Run-time dependency appleframeworks found: NO (tried framework) 00:02:55.819 Run-time dependency appleframeworks found: NO (tried framework) 00:02:55.819 Configuring xnvme_config.h using configuration 00:02:55.819 Configuring xnvme.spec using configuration 00:02:55.819 Run-time dependency bash-completion found: YES 2.11 00:02:55.819 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:55.819 Program cp found: YES (/usr/bin/cp) 00:02:55.819 Has header "winsock2.h" : NO 00:02:55.819 Has header "dbghelp.h" : NO 00:02:55.819 Library rpcrt4 found: NO 00:02:55.819 Library rt found: YES 00:02:55.819 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:55.819 Found CMake: /usr/bin/cmake (3.27.7) 00:02:55.819 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:55.819 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:55.819 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:55.819 Build targets in project: 32 00:02:55.819 00:02:55.819 xnvme 0.7.3 00:02:55.819 00:02:55.819 User defined options 00:02:55.819 with-libaio : enabled 00:02:55.819 with-liburing: enabled 00:02:55.819 with-libvfn : disabled 00:02:55.819 with-spdk : false 00:02:55.819 00:02:55.819 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:56.077 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:56.077 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:56.336 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:56.336 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:56.336 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:56.336 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:56.336 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:56.336 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:56.336 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:56.336 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:56.336 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:56.336 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:56.336 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:56.336 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:56.336 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:56.594 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:56.594 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:56.594 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:56.594 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:56.594 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:56.594 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:56.594 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:56.594 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:56.594 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:56.594 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:56.594 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:56.594 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:56.594 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:56.594 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:56.594 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:56.594 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:56.594 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:56.594 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:56.594 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:56.594 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:56.594 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:56.594 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:56.594 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:56.594 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:56.594 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:56.594 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:56.852 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:56.852 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:56.852 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:56.852 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:56.852 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:56.852 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:56.852 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:56.852 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:56.852 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:56.852 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:56.852 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:56.852 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:56.852 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:56.852 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:56.852 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:56.852 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:56.852 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:56.852 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:56.852 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:56.852 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:56.852 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:57.110 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:57.110 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:57.110 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:57.110 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:57.110 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:57.110 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:57.110 [68/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:57.110 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:57.110 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:57.110 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:57.110 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:57.110 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:57.110 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:57.110 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:57.110 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:57.368 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:57.368 [78/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:57.368 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:57.368 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:57.368 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:57.368 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:57.368 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:57.368 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:57.368 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:57.368 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:57.368 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:57.368 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:57.368 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:57.368 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:57.369 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:57.627 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:57.627 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:57.627 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:57.627 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:57.627 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:57.627 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:57.627 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:57.627 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:57.627 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:57.627 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:57.627 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:57.627 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:57.627 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:57.627 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:57.627 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:57.627 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:57.627 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:57.627 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:57.627 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:57.627 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:57.627 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:57.627 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:57.627 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:57.627 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:57.627 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:57.627 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:57.627 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:57.627 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:57.627 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:57.885 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:57.885 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:57.885 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:57.885 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:57.885 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:57.885 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:57.885 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:57.885 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:57.885 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 
00:02:57.885 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:57.885 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:57.885 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:57.885 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:57.885 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:57.885 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:58.143 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:58.143 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:58.143 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:58.143 [139/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:58.143 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:58.143 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:58.143 [142/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:58.143 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:58.143 [144/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:58.143 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:58.143 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:58.143 [147/203] Linking target lib/libxnvme.so 00:02:58.143 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:58.143 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:58.401 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:58.401 [151/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:58.401 [152/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:58.401 [153/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:58.401 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:58.401 [155/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:58.401 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:58.401 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:58.401 [158/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:58.401 [159/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:58.401 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:58.401 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:58.401 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:58.659 [163/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:58.659 [164/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:58.659 [165/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:58.659 [166/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:58.659 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:58.659 [168/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:58.659 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:58.659 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:58.659 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:58.916 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:58.916 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:58.916 [174/203] Linking static target lib/libxnvme.a 00:02:58.916 [175/203] Linking target tests/xnvme_tests_enum 00:02:58.916 [176/203] Linking 
target tests/xnvme_tests_buf 00:02:58.916 [177/203] Linking target tests/xnvme_tests_async_intf 00:02:58.917 [178/203] Linking target tests/xnvme_tests_lblk 00:02:58.917 [179/203] Linking target tests/xnvme_tests_xnvme_file 00:02:58.917 [180/203] Linking target tests/xnvme_tests_znd_state 00:02:58.917 [181/203] Linking target tests/xnvme_tests_scc 00:02:58.917 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:58.917 [183/203] Linking target tests/xnvme_tests_cli 00:02:58.917 [184/203] Linking target tests/xnvme_tests_ioworker 00:02:58.917 [185/203] Linking target tests/xnvme_tests_znd_append 00:02:59.174 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:59.174 [187/203] Linking target tests/xnvme_tests_kvs 00:02:59.175 [188/203] Linking target tools/lblk 00:02:59.175 [189/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:59.175 [190/203] Linking target tests/xnvme_tests_map 00:02:59.175 [191/203] Linking target tools/xdd 00:02:59.175 [192/203] Linking target tools/xnvme 00:02:59.175 [193/203] Linking target tools/xnvme_file 00:02:59.175 [194/203] Linking target tools/zoned 00:02:59.175 [195/203] Linking target examples/xnvme_enum 00:02:59.175 [196/203] Linking target examples/xnvme_dev 00:02:59.175 [197/203] Linking target tools/kvs 00:02:59.175 [198/203] Linking target examples/xnvme_single_sync 00:02:59.175 [199/203] Linking target examples/xnvme_io_async 00:02:59.175 [200/203] Linking target examples/zoned_io_async 00:02:59.175 [201/203] Linking target examples/xnvme_hello 00:02:59.175 [202/203] Linking target examples/xnvme_single_async 00:02:59.175 [203/203] Linking target examples/zoned_io_sync 00:02:59.175 INFO: autodetecting backend as ninja 00:02:59.175 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:59.175 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:07.282 The Meson build system 00:03:07.282 Version: 1.3.1 00:03:07.282 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:07.282 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:07.282 Build type: native build 00:03:07.282 Program cat found: YES (/usr/bin/cat) 00:03:07.282 Project name: DPDK 00:03:07.282 Project version: 23.11.0 00:03:07.282 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:07.282 C linker for the host machine: cc ld.bfd 2.39-16 00:03:07.282 Host machine cpu family: x86_64 00:03:07.282 Host machine cpu: x86_64 00:03:07.282 Message: ## Building in Developer Mode ## 00:03:07.282 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:07.282 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:07.282 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:07.282 Program python3 found: YES (/usr/bin/python3) 00:03:07.282 Program cat found: YES (/usr/bin/cat) 00:03:07.282 Compiler for C supports arguments -march=native: YES 00:03:07.282 Checking for size of "void *" : 8 00:03:07.282 Checking for size of "void *" : 8 (cached) 00:03:07.282 Library m found: YES 00:03:07.282 Library numa found: YES 00:03:07.282 Has header "numaif.h" : YES 00:03:07.282 Library fdt found: NO 00:03:07.282 Library execinfo found: NO 00:03:07.282 Has header "execinfo.h" : YES 00:03:07.282 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:07.282 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:07.282 Run-time dependency libbsd found: NO (tried 
pkgconfig) 00:03:07.282 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:07.282 Run-time dependency openssl found: YES 3.0.9 00:03:07.282 Run-time dependency libpcap found: YES 1.10.4 00:03:07.282 Has header "pcap.h" with dependency libpcap: YES 00:03:07.282 Compiler for C supports arguments -Wcast-qual: YES 00:03:07.282 Compiler for C supports arguments -Wdeprecated: YES 00:03:07.282 Compiler for C supports arguments -Wformat: YES 00:03:07.282 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:07.282 Compiler for C supports arguments -Wformat-security: NO 00:03:07.282 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:07.282 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:07.282 Compiler for C supports arguments -Wnested-externs: YES 00:03:07.282 Compiler for C supports arguments -Wold-style-definition: YES 00:03:07.282 Compiler for C supports arguments -Wpointer-arith: YES 00:03:07.282 Compiler for C supports arguments -Wsign-compare: YES 00:03:07.282 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:07.282 Compiler for C supports arguments -Wundef: YES 00:03:07.282 Compiler for C supports arguments -Wwrite-strings: YES 00:03:07.282 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:07.282 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:07.282 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:07.282 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:07.282 Program objdump found: YES (/usr/bin/objdump) 00:03:07.282 Compiler for C supports arguments -mavx512f: YES 00:03:07.282 Checking if "AVX512 checking" compiles: YES 00:03:07.282 Fetching value of define "__SSE4_2__" : 1 00:03:07.282 Fetching value of define "__AES__" : 1 00:03:07.282 Fetching value of define "__AVX__" : 1 00:03:07.282 Fetching value of define "__AVX2__" : 1 00:03:07.282 Fetching value of define "__AVX512BW__" : (undefined) 00:03:07.282 Fetching value of define "__AVX512CD__" : (undefined) 00:03:07.282 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:07.282 Fetching value of define "__AVX512F__" : (undefined) 00:03:07.282 Fetching value of define "__AVX512VL__" : (undefined) 00:03:07.282 Fetching value of define "__PCLMUL__" : 1 00:03:07.282 Fetching value of define "__RDRND__" : 1 00:03:07.282 Fetching value of define "__RDSEED__" : 1 00:03:07.282 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:07.282 Fetching value of define "__znver1__" : (undefined) 00:03:07.282 Fetching value of define "__znver2__" : (undefined) 00:03:07.282 Fetching value of define "__znver3__" : (undefined) 00:03:07.282 Fetching value of define "__znver4__" : (undefined) 00:03:07.282 Library asan found: YES 00:03:07.282 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:07.282 Message: lib/log: Defining dependency "log" 00:03:07.282 Message: lib/kvargs: Defining dependency "kvargs" 00:03:07.282 Message: lib/telemetry: Defining dependency "telemetry" 00:03:07.282 Library rt found: YES 00:03:07.282 Checking for function "getentropy" : NO 00:03:07.282 Message: lib/eal: Defining dependency "eal" 00:03:07.282 Message: lib/ring: Defining dependency "ring" 00:03:07.282 Message: lib/rcu: Defining dependency "rcu" 00:03:07.282 Message: lib/mempool: Defining dependency "mempool" 00:03:07.282 Message: lib/mbuf: Defining dependency "mbuf" 00:03:07.282 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:07.282 Fetching value of define "__AVX512F__" : 
(undefined) (cached) 00:03:07.282 Compiler for C supports arguments -mpclmul: YES 00:03:07.282 Compiler for C supports arguments -maes: YES 00:03:07.282 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:07.282 Compiler for C supports arguments -mavx512bw: YES 00:03:07.282 Compiler for C supports arguments -mavx512dq: YES 00:03:07.282 Compiler for C supports arguments -mavx512vl: YES 00:03:07.282 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:07.282 Compiler for C supports arguments -mavx2: YES 00:03:07.282 Compiler for C supports arguments -mavx: YES 00:03:07.282 Message: lib/net: Defining dependency "net" 00:03:07.282 Message: lib/meter: Defining dependency "meter" 00:03:07.282 Message: lib/ethdev: Defining dependency "ethdev" 00:03:07.282 Message: lib/pci: Defining dependency "pci" 00:03:07.282 Message: lib/cmdline: Defining dependency "cmdline" 00:03:07.282 Message: lib/hash: Defining dependency "hash" 00:03:07.282 Message: lib/timer: Defining dependency "timer" 00:03:07.282 Message: lib/compressdev: Defining dependency "compressdev" 00:03:07.282 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:07.282 Message: lib/dmadev: Defining dependency "dmadev" 00:03:07.282 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:07.282 Message: lib/power: Defining dependency "power" 00:03:07.282 Message: lib/reorder: Defining dependency "reorder" 00:03:07.282 Message: lib/security: Defining dependency "security" 00:03:07.282 Has header "linux/userfaultfd.h" : YES 00:03:07.282 Has header "linux/vduse.h" : YES 00:03:07.282 Message: lib/vhost: Defining dependency "vhost" 00:03:07.282 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:07.282 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:07.282 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:07.282 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:07.282 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:07.282 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:07.282 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:07.282 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:07.282 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:07.282 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:07.282 Program doxygen found: YES (/usr/bin/doxygen) 00:03:07.282 Configuring doxy-api-html.conf using configuration 00:03:07.282 Configuring doxy-api-man.conf using configuration 00:03:07.282 Program mandb found: YES (/usr/bin/mandb) 00:03:07.282 Program sphinx-build found: NO 00:03:07.282 Configuring rte_build_config.h using configuration 00:03:07.282 Message: 00:03:07.282 ================= 00:03:07.282 Applications Enabled 00:03:07.282 ================= 00:03:07.282 00:03:07.283 apps: 00:03:07.283 00:03:07.283 00:03:07.283 Message: 00:03:07.283 ================= 00:03:07.283 Libraries Enabled 00:03:07.283 ================= 00:03:07.283 00:03:07.283 libs: 00:03:07.283 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:07.283 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:07.283 cryptodev, dmadev, power, reorder, security, vhost, 00:03:07.283 00:03:07.283 Message: 00:03:07.283 =============== 00:03:07.283 Drivers Enabled 00:03:07.283 =============== 00:03:07.283 00:03:07.283 common: 00:03:07.283 00:03:07.283 bus: 00:03:07.283 pci, vdev, 00:03:07.283 
mempool: 00:03:07.283 ring, 00:03:07.283 dma: 00:03:07.283 00:03:07.283 net: 00:03:07.283 00:03:07.283 crypto: 00:03:07.283 00:03:07.283 compress: 00:03:07.283 00:03:07.283 vdpa: 00:03:07.283 00:03:07.283 00:03:07.283 Message: 00:03:07.283 ================= 00:03:07.283 Content Skipped 00:03:07.283 ================= 00:03:07.283 00:03:07.283 apps: 00:03:07.283 dumpcap: explicitly disabled via build config 00:03:07.283 graph: explicitly disabled via build config 00:03:07.283 pdump: explicitly disabled via build config 00:03:07.283 proc-info: explicitly disabled via build config 00:03:07.283 test-acl: explicitly disabled via build config 00:03:07.283 test-bbdev: explicitly disabled via build config 00:03:07.283 test-cmdline: explicitly disabled via build config 00:03:07.283 test-compress-perf: explicitly disabled via build config 00:03:07.283 test-crypto-perf: explicitly disabled via build config 00:03:07.283 test-dma-perf: explicitly disabled via build config 00:03:07.283 test-eventdev: explicitly disabled via build config 00:03:07.283 test-fib: explicitly disabled via build config 00:03:07.283 test-flow-perf: explicitly disabled via build config 00:03:07.283 test-gpudev: explicitly disabled via build config 00:03:07.283 test-mldev: explicitly disabled via build config 00:03:07.283 test-pipeline: explicitly disabled via build config 00:03:07.283 test-pmd: explicitly disabled via build config 00:03:07.283 test-regex: explicitly disabled via build config 00:03:07.283 test-sad: explicitly disabled via build config 00:03:07.283 test-security-perf: explicitly disabled via build config 00:03:07.283 00:03:07.283 libs: 00:03:07.283 metrics: explicitly disabled via build config 00:03:07.283 acl: explicitly disabled via build config 00:03:07.283 bbdev: explicitly disabled via build config 00:03:07.283 bitratestats: explicitly disabled via build config 00:03:07.283 bpf: explicitly disabled via build config 00:03:07.283 cfgfile: explicitly disabled via build config 00:03:07.283 distributor: explicitly disabled via build config 00:03:07.283 efd: explicitly disabled via build config 00:03:07.283 eventdev: explicitly disabled via build config 00:03:07.283 dispatcher: explicitly disabled via build config 00:03:07.283 gpudev: explicitly disabled via build config 00:03:07.283 gro: explicitly disabled via build config 00:03:07.283 gso: explicitly disabled via build config 00:03:07.283 ip_frag: explicitly disabled via build config 00:03:07.283 jobstats: explicitly disabled via build config 00:03:07.283 latencystats: explicitly disabled via build config 00:03:07.283 lpm: explicitly disabled via build config 00:03:07.283 member: explicitly disabled via build config 00:03:07.283 pcapng: explicitly disabled via build config 00:03:07.283 rawdev: explicitly disabled via build config 00:03:07.283 regexdev: explicitly disabled via build config 00:03:07.283 mldev: explicitly disabled via build config 00:03:07.283 rib: explicitly disabled via build config 00:03:07.283 sched: explicitly disabled via build config 00:03:07.283 stack: explicitly disabled via build config 00:03:07.283 ipsec: explicitly disabled via build config 00:03:07.283 pdcp: explicitly disabled via build config 00:03:07.283 fib: explicitly disabled via build config 00:03:07.283 port: explicitly disabled via build config 00:03:07.283 pdump: explicitly disabled via build config 00:03:07.283 table: explicitly disabled via build config 00:03:07.283 pipeline: explicitly disabled via build config 00:03:07.283 graph: explicitly disabled via build config 
00:03:07.283 node: explicitly disabled via build config 00:03:07.283 00:03:07.283 drivers: 00:03:07.283 common/cpt: not in enabled drivers build config 00:03:07.283 common/dpaax: not in enabled drivers build config 00:03:07.283 common/iavf: not in enabled drivers build config 00:03:07.283 common/idpf: not in enabled drivers build config 00:03:07.283 common/mvep: not in enabled drivers build config 00:03:07.283 common/octeontx: not in enabled drivers build config 00:03:07.283 bus/auxiliary: not in enabled drivers build config 00:03:07.283 bus/cdx: not in enabled drivers build config 00:03:07.283 bus/dpaa: not in enabled drivers build config 00:03:07.283 bus/fslmc: not in enabled drivers build config 00:03:07.283 bus/ifpga: not in enabled drivers build config 00:03:07.283 bus/platform: not in enabled drivers build config 00:03:07.283 bus/vmbus: not in enabled drivers build config 00:03:07.283 common/cnxk: not in enabled drivers build config 00:03:07.283 common/mlx5: not in enabled drivers build config 00:03:07.283 common/nfp: not in enabled drivers build config 00:03:07.283 common/qat: not in enabled drivers build config 00:03:07.283 common/sfc_efx: not in enabled drivers build config 00:03:07.283 mempool/bucket: not in enabled drivers build config 00:03:07.283 mempool/cnxk: not in enabled drivers build config 00:03:07.283 mempool/dpaa: not in enabled drivers build config 00:03:07.283 mempool/dpaa2: not in enabled drivers build config 00:03:07.283 mempool/octeontx: not in enabled drivers build config 00:03:07.283 mempool/stack: not in enabled drivers build config 00:03:07.283 dma/cnxk: not in enabled drivers build config 00:03:07.283 dma/dpaa: not in enabled drivers build config 00:03:07.283 dma/dpaa2: not in enabled drivers build config 00:03:07.283 dma/hisilicon: not in enabled drivers build config 00:03:07.283 dma/idxd: not in enabled drivers build config 00:03:07.283 dma/ioat: not in enabled drivers build config 00:03:07.283 dma/skeleton: not in enabled drivers build config 00:03:07.283 net/af_packet: not in enabled drivers build config 00:03:07.283 net/af_xdp: not in enabled drivers build config 00:03:07.283 net/ark: not in enabled drivers build config 00:03:07.283 net/atlantic: not in enabled drivers build config 00:03:07.283 net/avp: not in enabled drivers build config 00:03:07.283 net/axgbe: not in enabled drivers build config 00:03:07.283 net/bnx2x: not in enabled drivers build config 00:03:07.283 net/bnxt: not in enabled drivers build config 00:03:07.283 net/bonding: not in enabled drivers build config 00:03:07.283 net/cnxk: not in enabled drivers build config 00:03:07.283 net/cpfl: not in enabled drivers build config 00:03:07.283 net/cxgbe: not in enabled drivers build config 00:03:07.283 net/dpaa: not in enabled drivers build config 00:03:07.283 net/dpaa2: not in enabled drivers build config 00:03:07.283 net/e1000: not in enabled drivers build config 00:03:07.283 net/ena: not in enabled drivers build config 00:03:07.283 net/enetc: not in enabled drivers build config 00:03:07.283 net/enetfec: not in enabled drivers build config 00:03:07.283 net/enic: not in enabled drivers build config 00:03:07.283 net/failsafe: not in enabled drivers build config 00:03:07.283 net/fm10k: not in enabled drivers build config 00:03:07.283 net/gve: not in enabled drivers build config 00:03:07.283 net/hinic: not in enabled drivers build config 00:03:07.283 net/hns3: not in enabled drivers build config 00:03:07.283 net/i40e: not in enabled drivers build config 00:03:07.283 net/iavf: not in enabled 
drivers build config 00:03:07.283 net/ice: not in enabled drivers build config 00:03:07.283 net/idpf: not in enabled drivers build config 00:03:07.283 net/igc: not in enabled drivers build config 00:03:07.283 net/ionic: not in enabled drivers build config 00:03:07.283 net/ipn3ke: not in enabled drivers build config 00:03:07.283 net/ixgbe: not in enabled drivers build config 00:03:07.283 net/mana: not in enabled drivers build config 00:03:07.283 net/memif: not in enabled drivers build config 00:03:07.283 net/mlx4: not in enabled drivers build config 00:03:07.283 net/mlx5: not in enabled drivers build config 00:03:07.283 net/mvneta: not in enabled drivers build config 00:03:07.283 net/mvpp2: not in enabled drivers build config 00:03:07.283 net/netvsc: not in enabled drivers build config 00:03:07.283 net/nfb: not in enabled drivers build config 00:03:07.283 net/nfp: not in enabled drivers build config 00:03:07.283 net/ngbe: not in enabled drivers build config 00:03:07.283 net/null: not in enabled drivers build config 00:03:07.283 net/octeontx: not in enabled drivers build config 00:03:07.283 net/octeon_ep: not in enabled drivers build config 00:03:07.283 net/pcap: not in enabled drivers build config 00:03:07.283 net/pfe: not in enabled drivers build config 00:03:07.283 net/qede: not in enabled drivers build config 00:03:07.283 net/ring: not in enabled drivers build config 00:03:07.283 net/sfc: not in enabled drivers build config 00:03:07.283 net/softnic: not in enabled drivers build config 00:03:07.283 net/tap: not in enabled drivers build config 00:03:07.283 net/thunderx: not in enabled drivers build config 00:03:07.283 net/txgbe: not in enabled drivers build config 00:03:07.283 net/vdev_netvsc: not in enabled drivers build config 00:03:07.283 net/vhost: not in enabled drivers build config 00:03:07.283 net/virtio: not in enabled drivers build config 00:03:07.283 net/vmxnet3: not in enabled drivers build config 00:03:07.283 raw/*: missing internal dependency, "rawdev" 00:03:07.283 crypto/armv8: not in enabled drivers build config 00:03:07.283 crypto/bcmfs: not in enabled drivers build config 00:03:07.283 crypto/caam_jr: not in enabled drivers build config 00:03:07.283 crypto/ccp: not in enabled drivers build config 00:03:07.283 crypto/cnxk: not in enabled drivers build config 00:03:07.283 crypto/dpaa_sec: not in enabled drivers build config 00:03:07.283 crypto/dpaa2_sec: not in enabled drivers build config 00:03:07.283 crypto/ipsec_mb: not in enabled drivers build config 00:03:07.283 crypto/mlx5: not in enabled drivers build config 00:03:07.283 crypto/mvsam: not in enabled drivers build config 00:03:07.283 crypto/nitrox: not in enabled drivers build config 00:03:07.283 crypto/null: not in enabled drivers build config 00:03:07.283 crypto/octeontx: not in enabled drivers build config 00:03:07.283 crypto/openssl: not in enabled drivers build config 00:03:07.283 crypto/scheduler: not in enabled drivers build config 00:03:07.283 crypto/uadk: not in enabled drivers build config 00:03:07.283 crypto/virtio: not in enabled drivers build config 00:03:07.283 compress/isal: not in enabled drivers build config 00:03:07.283 compress/mlx5: not in enabled drivers build config 00:03:07.284 compress/octeontx: not in enabled drivers build config 00:03:07.284 compress/zlib: not in enabled drivers build config 00:03:07.284 regex/*: missing internal dependency, "regexdev" 00:03:07.284 ml/*: missing internal dependency, "mldev" 00:03:07.284 vdpa/ifc: not in enabled drivers build config 00:03:07.284 vdpa/mlx5: not 
in enabled drivers build config 00:03:07.284 vdpa/nfp: not in enabled drivers build config 00:03:07.284 vdpa/sfc: not in enabled drivers build config 00:03:07.284 event/*: missing internal dependency, "eventdev" 00:03:07.284 baseband/*: missing internal dependency, "bbdev" 00:03:07.284 gpu/*: missing internal dependency, "gpudev" 00:03:07.284 00:03:07.284 00:03:07.284 Build targets in project: 85 00:03:07.284 00:03:07.284 DPDK 23.11.0 00:03:07.284 00:03:07.284 User defined options 00:03:07.284 buildtype : debug 00:03:07.284 default_library : shared 00:03:07.284 libdir : lib 00:03:07.284 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:07.284 b_sanitize : address 00:03:07.284 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:07.284 c_link_args : 00:03:07.284 cpu_instruction_set: native 00:03:07.284 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:07.284 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:07.284 enable_docs : false 00:03:07.284 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:07.284 enable_kmods : false 00:03:07.284 tests : false 00:03:07.284 00:03:07.284 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:07.284 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:07.541 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:07.541 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:07.541 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:07.541 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:07.541 [5/265] Linking static target lib/librte_kvargs.a 00:03:07.541 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:07.541 [7/265] Linking static target lib/librte_log.a 00:03:07.541 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:07.541 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:07.541 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:08.105 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.105 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:08.105 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:08.105 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:08.362 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:08.362 [16/265] Linking static target lib/librte_telemetry.a 00:03:08.362 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:08.362 [18/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.362 [19/265] Linking target lib/librte_log.so.24.0 00:03:08.619 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:08.619 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:08.619 [22/265] 
Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:08.619 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:08.882 [24/265] Linking target lib/librte_kvargs.so.24.0 00:03:08.882 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:08.882 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:09.140 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:09.140 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:09.140 [29/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.140 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:09.398 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:09.398 [32/265] Linking target lib/librte_telemetry.so.24.0 00:03:09.398 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:09.655 [34/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:09.655 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:09.655 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:09.655 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:09.655 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:09.655 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:09.655 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:09.911 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:09.911 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:09.911 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:09.911 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:10.169 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:10.427 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:10.427 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:10.683 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:10.683 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:10.683 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:10.683 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:10.941 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:10.941 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:10.941 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:10.941 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:10.941 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:11.199 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:11.199 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:11.199 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:11.199 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:11.457 
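The "User defined options" block printed at the end of the configure output above maps onto a meson setup invocation along the following lines. This is a reconstruction from the logged option values, not the exact command SPDK's dpdkbuild wrapper runs, and the long disable_apps/disable_libs values are abbreviated here to the lists printed in the summary:

    meson setup build-tmp \
        --buildtype=debug --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Ddefault_library=shared -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps=dumpcap,graph,pdump,...   # full list as logged above
        -Ddisable_libs=acl,bbdev,bitratestats,... # full list as logged above
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false

Each "Compiler for C supports arguments ...: YES/NO" line earlier in the configure output is meson compiling a trivial test program with that single flag added and recording whether the compiler accepts it, and the "Fetching value of define" lines probe the preprocessor the same way; that is why AVX-512 flags can report YES even while the __AVX512F__ define is (undefined) for the host CPU.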
[61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:11.457 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:11.715 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:11.715 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:11.715 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:11.973 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:11.973 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:11.973 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:11.973 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:12.231 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:12.231 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:12.231 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:12.231 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:12.489 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:12.489 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:12.489 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:12.747 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:12.747 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:12.747 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:12.747 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:13.004 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:13.004 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:13.262 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:13.262 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:13.262 [85/265] Linking static target lib/librte_ring.a 00:03:13.262 [86/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:13.519 [87/265] Linking static target lib/librte_eal.a 00:03:13.519 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:13.519 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:13.777 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:13.777 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:13.777 [92/265] Linking static target lib/librte_mempool.a 00:03:13.777 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:13.777 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:13.777 [95/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.035 [96/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:14.035 [97/265] Linking static target lib/librte_rcu.a 00:03:14.297 [98/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:14.560 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:14.560 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:14.560 [101/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:14.560 [102/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:14.560 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:14.560 [104/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:14.560 [105/265] Linking static target lib/librte_mbuf.a 00:03:14.560 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:14.560 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:15.123 [108/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.123 [109/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:15.123 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:15.381 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:15.381 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:15.381 [113/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:15.381 [114/265] Linking static target lib/librte_net.a 00:03:15.381 [115/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:15.381 [116/265] Linking static target lib/librte_meter.a 00:03:15.709 [117/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.966 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:15.966 [119/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.966 [120/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.966 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:16.225 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:16.225 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:16.225 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:16.225 [125/265] Linking static target lib/librte_pci.a 00:03:16.481 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:16.739 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:16.739 [128/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.739 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:16.739 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:16.739 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:16.739 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:16.739 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:16.739 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:16.996 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:16.996 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:16.996 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:16.996 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:16.996 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:16.996 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:16.996 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:17.254 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 
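Two recurring step types in this stretch are build bookkeeping rather than compilation: "Generating symbol file ..." extracts a shared library's exported-symbol list so that dependents only relink when that list actually changes, and "Generating lib/X.sym_chk with a custom command" is DPDK's check that the symbols a library exports match its version.map. Conceptually the extraction step reduces to something like the following (an illustrative sketch, not meson's exact invocation):

    nm --dynamic --defined-only build-tmp/lib/librte_log.so.24.0 \
        | awk '{print $3}' | sort > librte_log.so.24.0.symbols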
00:03:17.254 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:17.254 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:17.254 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:17.513 [146/265] Linking static target lib/librte_cmdline.a 00:03:17.770 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:17.770 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:18.028 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:18.028 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:18.286 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:18.286 [152/265] Linking static target lib/librte_compressdev.a 00:03:18.286 [153/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:18.286 [154/265] Linking static target lib/librte_timer.a 00:03:18.286 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:18.545 [156/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:18.545 [157/265] Linking static target lib/librte_hash.a 00:03:18.803 [158/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:18.803 [159/265] Linking static target lib/librte_ethdev.a 00:03:18.803 [160/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:18.803 [161/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.803 [162/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:19.062 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:19.062 [164/265] Linking static target lib/librte_dmadev.a 00:03:19.062 [165/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:19.062 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:19.062 [167/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.062 [168/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.062 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:19.629 [170/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:19.629 [171/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.629 [172/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:19.629 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.629 [174/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:19.888 [175/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:19.888 [176/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:19.888 [177/265] Linking static target lib/librte_cryptodev.a 00:03:19.888 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:20.145 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:20.145 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:20.145 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:20.145 [182/265] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:20.405 [183/265] Linking static target lib/librte_power.a 00:03:20.405 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:20.405 [185/265] Linking static target lib/librte_reorder.a 00:03:20.664 [186/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:20.664 [187/265] Linking static target lib/librte_security.a 00:03:20.664 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:20.664 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:20.922 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:20.922 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.180 [192/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.180 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.439 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:21.439 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:21.697 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:21.955 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:21.955 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:21.955 [199/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.955 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:21.955 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:21.955 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:22.213 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:22.472 [204/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:22.472 [205/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:22.472 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:22.472 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:22.730 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:22.730 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:22.730 [210/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:22.730 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:22.730 [212/265] Linking static target drivers/librte_bus_vdev.a 00:03:22.730 [213/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:22.730 [214/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:22.730 [215/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:22.730 [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:22.730 [217/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:22.730 [218/265] Linking static target drivers/librte_bus_pci.a 00:03:22.989 [219/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:22.989 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.989 [221/265] 
Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:22.989 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:22.989 [223/265] Linking static target drivers/librte_mempool_ring.a 00:03:23.556 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.491 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.491 [226/265] Linking target lib/librte_eal.so.24.0 00:03:24.491 [227/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:24.491 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:24.491 [229/265] Linking target lib/librte_timer.so.24.0 00:03:24.491 [230/265] Linking target lib/librte_dmadev.so.24.0 00:03:24.491 [231/265] Linking target lib/librte_meter.so.24.0 00:03:24.491 [232/265] Linking target lib/librte_pci.so.24.0 00:03:24.491 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:24.491 [234/265] Linking target lib/librte_ring.so.24.0 00:03:24.749 [235/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:24.749 [236/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:24.749 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:24.749 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:24.749 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:24.749 [240/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:24.749 [241/265] Linking target lib/librte_mempool.so.24.0 00:03:24.749 [242/265] Linking target lib/librte_rcu.so.24.0 00:03:25.007 [243/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:25.007 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:25.007 [245/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:25.007 [246/265] Linking target lib/librte_mbuf.so.24.0 00:03:25.007 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:25.267 [248/265] Linking target lib/librte_cryptodev.so.24.0 00:03:25.267 [249/265] Linking target lib/librte_reorder.so.24.0 00:03:25.267 [250/265] Linking target lib/librte_compressdev.so.24.0 00:03:25.267 [251/265] Linking target lib/librte_net.so.24.0 00:03:25.267 [252/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:25.267 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:25.267 [254/265] Linking target lib/librte_cmdline.so.24.0 00:03:25.267 [255/265] Linking target lib/librte_security.so.24.0 00:03:25.267 [256/265] Linking target lib/librte_hash.so.24.0 00:03:25.526 [257/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:25.526 [258/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.784 [259/265] Linking target lib/librte_ethdev.so.24.0 00:03:25.784 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:26.043 [261/265] Linking target lib/librte_power.so.24.0 00:03:28.576 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:28.576 [263/265] Linking static target lib/librte_vhost.a 
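The [264/265] and [265/265] targets below are the last of the bundled DPDK build; every "CC lib/..." line after them comes from SPDK's own makefiles. When pointing SPDK at a DPDK tree prepared this way by hand, the equivalent flow is roughly the following (a sketch with placeholder paths; --with-dpdk is a real configure flag, the rest mirrors what this pipeline automates):

    ninja -C dpdk/build-tmp install      # stage DPDK into its configured prefix
    ./configure --with-dpdk=dpdk/build   # point SPDK at the staged DPDK build
    make -j10                            # emits the CC/LIB/SO lines seen below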
00:03:30.480 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.480 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:30.480 INFO: autodetecting backend as ninja 00:03:30.480 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:31.441 CC lib/ut_mock/mock.o 00:03:31.441 CC lib/ut/ut.o 00:03:31.441 CC lib/log/log.o 00:03:31.441 CC lib/log/log_flags.o 00:03:31.441 CC lib/log/log_deprecated.o 00:03:31.699 LIB libspdk_ut_mock.a 00:03:31.699 SO libspdk_ut_mock.so.6.0 00:03:31.699 LIB libspdk_log.a 00:03:31.699 LIB libspdk_ut.a 00:03:31.699 SO libspdk_log.so.7.0 00:03:31.699 SO libspdk_ut.so.2.0 00:03:31.699 SYMLINK libspdk_ut_mock.so 00:03:31.699 SYMLINK libspdk_ut.so 00:03:31.699 SYMLINK libspdk_log.so 00:03:31.958 CC lib/util/base64.o 00:03:31.958 CC lib/util/bit_array.o 00:03:31.958 CC lib/util/cpuset.o 00:03:31.958 CC lib/util/crc16.o 00:03:31.958 CC lib/util/crc32.o 00:03:31.958 CC lib/util/crc32c.o 00:03:31.958 CC lib/dma/dma.o 00:03:31.958 CXX lib/trace_parser/trace.o 00:03:31.958 CC lib/ioat/ioat.o 00:03:32.217 CC lib/vfio_user/host/vfio_user_pci.o 00:03:32.217 CC lib/util/crc64.o 00:03:32.217 CC lib/util/crc32_ieee.o 00:03:32.217 CC lib/util/dif.o 00:03:32.217 CC lib/util/fd.o 00:03:32.217 CC lib/util/file.o 00:03:32.217 LIB libspdk_dma.a 00:03:32.217 CC lib/util/hexlify.o 00:03:32.217 CC lib/vfio_user/host/vfio_user.o 00:03:32.217 CC lib/util/iov.o 00:03:32.217 SO libspdk_dma.so.4.0 00:03:32.217 LIB libspdk_ioat.a 00:03:32.475 SO libspdk_ioat.so.7.0 00:03:32.475 CC lib/util/math.o 00:03:32.475 SYMLINK libspdk_dma.so 00:03:32.475 CC lib/util/pipe.o 00:03:32.475 CC lib/util/strerror_tls.o 00:03:32.475 SYMLINK libspdk_ioat.so 00:03:32.475 CC lib/util/string.o 00:03:32.475 CC lib/util/uuid.o 00:03:32.475 CC lib/util/fd_group.o 00:03:32.475 CC lib/util/xor.o 00:03:32.475 LIB libspdk_vfio_user.a 00:03:32.475 CC lib/util/zipf.o 00:03:32.475 SO libspdk_vfio_user.so.5.0 00:03:32.733 SYMLINK libspdk_vfio_user.so 00:03:32.991 LIB libspdk_util.a 00:03:33.250 SO libspdk_util.so.9.0 00:03:33.250 LIB libspdk_trace_parser.a 00:03:33.250 SO libspdk_trace_parser.so.5.0 00:03:33.508 SYMLINK libspdk_trace_parser.so 00:03:33.508 SYMLINK libspdk_util.so 00:03:33.508 CC lib/env_dpdk/env.o 00:03:33.508 CC lib/env_dpdk/memory.o 00:03:33.508 CC lib/env_dpdk/pci.o 00:03:33.508 CC lib/env_dpdk/init.o 00:03:33.508 CC lib/conf/conf.o 00:03:33.508 CC lib/env_dpdk/threads.o 00:03:33.508 CC lib/idxd/idxd.o 00:03:33.508 CC lib/rdma/common.o 00:03:33.508 CC lib/json/json_parse.o 00:03:33.508 CC lib/vmd/vmd.o 00:03:33.766 CC lib/json/json_util.o 00:03:33.766 LIB libspdk_conf.a 00:03:33.766 SO libspdk_conf.so.6.0 00:03:34.144 CC lib/rdma/rdma_verbs.o 00:03:34.144 CC lib/idxd/idxd_user.o 00:03:34.144 SYMLINK libspdk_conf.so 00:03:34.144 CC lib/vmd/led.o 00:03:34.144 CC lib/json/json_write.o 00:03:34.144 CC lib/env_dpdk/pci_ioat.o 00:03:34.144 CC lib/env_dpdk/pci_virtio.o 00:03:34.144 CC lib/env_dpdk/pci_vmd.o 00:03:34.144 LIB libspdk_rdma.a 00:03:34.144 SO libspdk_rdma.so.6.0 00:03:34.144 CC lib/env_dpdk/pci_idxd.o 00:03:34.144 CC lib/env_dpdk/pci_event.o 00:03:34.144 CC lib/env_dpdk/sigbus_handler.o 00:03:34.144 CC lib/env_dpdk/pci_dpdk.o 00:03:34.144 SYMLINK libspdk_rdma.so 00:03:34.144 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:34.409 LIB libspdk_json.a 00:03:34.409 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:34.409 SO libspdk_json.so.6.0 00:03:34.409 SYMLINK libspdk_json.so 00:03:34.409 LIB 
libspdk_vmd.a 00:03:34.409 LIB libspdk_idxd.a 00:03:34.671 SO libspdk_vmd.so.6.0 00:03:34.671 SO libspdk_idxd.so.12.0 00:03:34.671 SYMLINK libspdk_vmd.so 00:03:34.671 SYMLINK libspdk_idxd.so 00:03:34.671 CC lib/jsonrpc/jsonrpc_server.o 00:03:34.671 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:34.671 CC lib/jsonrpc/jsonrpc_client.o 00:03:34.671 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:34.933 LIB libspdk_jsonrpc.a 00:03:34.933 SO libspdk_jsonrpc.so.6.0 00:03:35.193 SYMLINK libspdk_jsonrpc.so 00:03:35.452 CC lib/rpc/rpc.o 00:03:35.452 LIB libspdk_env_dpdk.a 00:03:35.711 LIB libspdk_rpc.a 00:03:35.711 SO libspdk_env_dpdk.so.14.0 00:03:35.711 SO libspdk_rpc.so.6.0 00:03:35.711 SYMLINK libspdk_rpc.so 00:03:35.711 SYMLINK libspdk_env_dpdk.so 00:03:36.027 CC lib/keyring/keyring.o 00:03:36.027 CC lib/keyring/keyring_rpc.o 00:03:36.027 CC lib/trace/trace.o 00:03:36.027 CC lib/trace/trace_flags.o 00:03:36.027 CC lib/trace/trace_rpc.o 00:03:36.027 CC lib/notify/notify.o 00:03:36.027 CC lib/notify/notify_rpc.o 00:03:36.027 LIB libspdk_notify.a 00:03:36.286 SO libspdk_notify.so.6.0 00:03:36.286 LIB libspdk_keyring.a 00:03:36.286 LIB libspdk_trace.a 00:03:36.286 SYMLINK libspdk_notify.so 00:03:36.286 SO libspdk_keyring.so.1.0 00:03:36.286 SO libspdk_trace.so.10.0 00:03:36.286 SYMLINK libspdk_keyring.so 00:03:36.286 SYMLINK libspdk_trace.so 00:03:36.544 CC lib/sock/sock.o 00:03:36.544 CC lib/sock/sock_rpc.o 00:03:36.544 CC lib/thread/thread.o 00:03:36.544 CC lib/thread/iobuf.o 00:03:37.112 LIB libspdk_sock.a 00:03:37.112 SO libspdk_sock.so.9.0 00:03:37.370 SYMLINK libspdk_sock.so 00:03:37.629 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:37.629 CC lib/nvme/nvme_ctrlr.o 00:03:37.629 CC lib/nvme/nvme_fabric.o 00:03:37.629 CC lib/nvme/nvme_ns_cmd.o 00:03:37.629 CC lib/nvme/nvme_ns.o 00:03:37.629 CC lib/nvme/nvme_pcie.o 00:03:37.629 CC lib/nvme/nvme.o 00:03:37.629 CC lib/nvme/nvme_qpair.o 00:03:37.629 CC lib/nvme/nvme_pcie_common.o 00:03:38.563 CC lib/nvme/nvme_quirks.o 00:03:38.563 CC lib/nvme/nvme_transport.o 00:03:38.563 CC lib/nvme/nvme_discovery.o 00:03:38.563 LIB libspdk_thread.a 00:03:38.563 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:38.563 SO libspdk_thread.so.10.0 00:03:38.563 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:38.821 CC lib/nvme/nvme_tcp.o 00:03:38.821 SYMLINK libspdk_thread.so 00:03:38.821 CC lib/nvme/nvme_opal.o 00:03:38.821 CC lib/nvme/nvme_io_msg.o 00:03:39.079 CC lib/nvme/nvme_poll_group.o 00:03:39.079 CC lib/nvme/nvme_zns.o 00:03:39.338 CC lib/nvme/nvme_stubs.o 00:03:39.338 CC lib/accel/accel.o 00:03:39.338 CC lib/accel/accel_rpc.o 00:03:39.338 CC lib/accel/accel_sw.o 00:03:39.596 CC lib/nvme/nvme_auth.o 00:03:39.596 CC lib/blob/blobstore.o 00:03:39.596 CC lib/blob/request.o 00:03:39.596 CC lib/nvme/nvme_cuse.o 00:03:39.596 CC lib/blob/zeroes.o 00:03:39.855 CC lib/nvme/nvme_rdma.o 00:03:39.855 CC lib/blob/blob_bs_dev.o 00:03:40.114 CC lib/init/json_config.o 00:03:40.114 CC lib/init/subsystem.o 00:03:40.372 CC lib/virtio/virtio.o 00:03:40.372 CC lib/init/subsystem_rpc.o 00:03:40.372 CC lib/init/rpc.o 00:03:40.372 CC lib/virtio/virtio_vhost_user.o 00:03:40.629 LIB libspdk_init.a 00:03:40.629 SO libspdk_init.so.5.0 00:03:40.629 CC lib/virtio/virtio_vfio_user.o 00:03:40.629 CC lib/virtio/virtio_pci.o 00:03:40.629 SYMLINK libspdk_init.so 00:03:40.629 LIB libspdk_accel.a 00:03:40.629 SO libspdk_accel.so.15.0 00:03:40.886 SYMLINK libspdk_accel.so 00:03:40.886 CC lib/event/app.o 00:03:40.886 CC lib/event/reactor.o 00:03:40.886 CC lib/event/log_rpc.o 00:03:40.886 CC lib/event/app_rpc.o 00:03:40.886 CC 
lib/event/scheduler_static.o 00:03:41.145 LIB libspdk_virtio.a 00:03:41.145 SO libspdk_virtio.so.7.0 00:03:41.145 CC lib/bdev/bdev.o 00:03:41.145 CC lib/bdev/bdev_rpc.o 00:03:41.145 CC lib/bdev/bdev_zone.o 00:03:41.145 CC lib/bdev/part.o 00:03:41.145 SYMLINK libspdk_virtio.so 00:03:41.145 CC lib/bdev/scsi_nvme.o 00:03:41.403 LIB libspdk_event.a 00:03:41.403 SO libspdk_event.so.13.0 00:03:41.662 LIB libspdk_nvme.a 00:03:41.662 SYMLINK libspdk_event.so 00:03:41.662 SO libspdk_nvme.so.13.0 00:03:42.227 SYMLINK libspdk_nvme.so 00:03:44.127 LIB libspdk_blob.a 00:03:44.127 SO libspdk_blob.so.11.0 00:03:44.127 SYMLINK libspdk_blob.so 00:03:44.385 CC lib/lvol/lvol.o 00:03:44.385 CC lib/blobfs/blobfs.o 00:03:44.385 CC lib/blobfs/tree.o 00:03:44.643 LIB libspdk_bdev.a 00:03:44.643 SO libspdk_bdev.so.15.0 00:03:44.901 SYMLINK libspdk_bdev.so 00:03:45.158 CC lib/scsi/dev.o 00:03:45.159 CC lib/scsi/lun.o 00:03:45.159 CC lib/scsi/port.o 00:03:45.159 CC lib/nbd/nbd.o 00:03:45.159 CC lib/nvmf/ctrlr.o 00:03:45.159 CC lib/scsi/scsi.o 00:03:45.159 CC lib/ftl/ftl_core.o 00:03:45.159 CC lib/ublk/ublk.o 00:03:45.416 CC lib/scsi/scsi_bdev.o 00:03:45.416 LIB libspdk_blobfs.a 00:03:45.416 CC lib/scsi/scsi_pr.o 00:03:45.416 SO libspdk_blobfs.so.10.0 00:03:45.416 LIB libspdk_lvol.a 00:03:45.416 CC lib/scsi/scsi_rpc.o 00:03:45.416 SO libspdk_lvol.so.10.0 00:03:45.675 SYMLINK libspdk_blobfs.so 00:03:45.675 CC lib/scsi/task.o 00:03:45.675 SYMLINK libspdk_lvol.so 00:03:45.675 CC lib/nbd/nbd_rpc.o 00:03:45.675 CC lib/ftl/ftl_init.o 00:03:45.675 CC lib/nvmf/ctrlr_discovery.o 00:03:45.675 CC lib/ftl/ftl_layout.o 00:03:45.675 LIB libspdk_nbd.a 00:03:45.933 CC lib/ftl/ftl_debug.o 00:03:45.933 SO libspdk_nbd.so.7.0 00:03:45.933 CC lib/ftl/ftl_io.o 00:03:45.933 CC lib/nvmf/ctrlr_bdev.o 00:03:45.933 SYMLINK libspdk_nbd.so 00:03:45.933 CC lib/nvmf/subsystem.o 00:03:45.933 CC lib/nvmf/nvmf.o 00:03:45.933 LIB libspdk_scsi.a 00:03:46.191 CC lib/ftl/ftl_sb.o 00:03:46.191 SO libspdk_scsi.so.9.0 00:03:46.191 CC lib/ftl/ftl_l2p.o 00:03:46.191 SYMLINK libspdk_scsi.so 00:03:46.191 CC lib/ftl/ftl_l2p_flat.o 00:03:46.191 CC lib/ftl/ftl_nv_cache.o 00:03:46.191 CC lib/ublk/ublk_rpc.o 00:03:46.449 CC lib/ftl/ftl_band.o 00:03:46.449 CC lib/nvmf/nvmf_rpc.o 00:03:46.449 CC lib/ftl/ftl_band_ops.o 00:03:46.449 LIB libspdk_ublk.a 00:03:46.707 SO libspdk_ublk.so.3.0 00:03:46.707 SYMLINK libspdk_ublk.so 00:03:46.707 CC lib/nvmf/transport.o 00:03:46.707 CC lib/nvmf/tcp.o 00:03:46.965 CC lib/nvmf/stubs.o 00:03:46.965 CC lib/nvmf/mdns_server.o 00:03:46.965 CC lib/nvmf/rdma.o 00:03:47.222 CC lib/nvmf/auth.o 00:03:47.482 CC lib/ftl/ftl_writer.o 00:03:47.482 CC lib/ftl/ftl_rq.o 00:03:47.740 CC lib/ftl/ftl_reloc.o 00:03:47.740 CC lib/ftl/ftl_l2p_cache.o 00:03:47.740 CC lib/iscsi/conn.o 00:03:47.999 CC lib/iscsi/init_grp.o 00:03:47.999 CC lib/iscsi/iscsi.o 00:03:47.999 CC lib/iscsi/md5.o 00:03:47.999 CC lib/ftl/ftl_p2l.o 00:03:47.999 CC lib/ftl/mngt/ftl_mngt.o 00:03:48.257 CC lib/iscsi/param.o 00:03:48.257 CC lib/iscsi/portal_grp.o 00:03:48.257 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:48.516 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:48.516 CC lib/iscsi/tgt_node.o 00:03:48.516 CC lib/iscsi/iscsi_subsystem.o 00:03:48.516 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:48.516 CC lib/vhost/vhost.o 00:03:48.516 CC lib/iscsi/iscsi_rpc.o 00:03:48.516 CC lib/vhost/vhost_rpc.o 00:03:48.782 CC lib/iscsi/task.o 00:03:48.782 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:48.782 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:48.782 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:49.039 CC 
lib/ftl/mngt/ftl_mngt_l2p.o 00:03:49.039 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:49.039 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:49.039 CC lib/vhost/vhost_scsi.o 00:03:49.039 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:49.296 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:49.296 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:49.296 CC lib/vhost/vhost_blk.o 00:03:49.296 CC lib/ftl/utils/ftl_conf.o 00:03:49.296 CC lib/vhost/rte_vhost_user.o 00:03:49.296 CC lib/ftl/utils/ftl_md.o 00:03:49.554 CC lib/ftl/utils/ftl_mempool.o 00:03:49.554 CC lib/ftl/utils/ftl_bitmap.o 00:03:49.554 CC lib/ftl/utils/ftl_property.o 00:03:49.554 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:49.554 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:49.813 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:49.813 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:49.813 LIB libspdk_nvmf.a 00:03:49.813 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:50.072 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:50.072 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:50.072 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:50.072 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:50.072 LIB libspdk_iscsi.a 00:03:50.072 SO libspdk_nvmf.so.18.0 00:03:50.072 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:50.072 SO libspdk_iscsi.so.8.0 00:03:50.072 CC lib/ftl/base/ftl_base_dev.o 00:03:50.072 CC lib/ftl/base/ftl_base_bdev.o 00:03:50.072 CC lib/ftl/ftl_trace.o 00:03:50.332 SYMLINK libspdk_iscsi.so 00:03:50.332 SYMLINK libspdk_nvmf.so 00:03:50.594 LIB libspdk_ftl.a 00:03:50.594 LIB libspdk_vhost.a 00:03:50.594 SO libspdk_vhost.so.8.0 00:03:50.594 SO libspdk_ftl.so.9.0 00:03:50.862 SYMLINK libspdk_vhost.so 00:03:51.122 SYMLINK libspdk_ftl.so 00:03:51.380 CC module/env_dpdk/env_dpdk_rpc.o 00:03:51.638 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:51.638 CC module/accel/ioat/accel_ioat.o 00:03:51.638 CC module/scheduler/gscheduler/gscheduler.o 00:03:51.638 CC module/blob/bdev/blob_bdev.o 00:03:51.638 CC module/keyring/file/keyring.o 00:03:51.638 CC module/accel/dsa/accel_dsa.o 00:03:51.638 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:51.638 CC module/sock/posix/posix.o 00:03:51.639 CC module/accel/error/accel_error.o 00:03:51.639 LIB libspdk_env_dpdk_rpc.a 00:03:51.639 SO libspdk_env_dpdk_rpc.so.6.0 00:03:51.639 SYMLINK libspdk_env_dpdk_rpc.so 00:03:51.639 CC module/accel/dsa/accel_dsa_rpc.o 00:03:51.639 CC module/keyring/file/keyring_rpc.o 00:03:51.639 LIB libspdk_scheduler_dpdk_governor.a 00:03:51.639 LIB libspdk_scheduler_gscheduler.a 00:03:51.639 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:51.639 SO libspdk_scheduler_gscheduler.so.4.0 00:03:51.639 LIB libspdk_scheduler_dynamic.a 00:03:51.639 CC module/accel/ioat/accel_ioat_rpc.o 00:03:51.639 SO libspdk_scheduler_dynamic.so.4.0 00:03:51.639 CC module/accel/error/accel_error_rpc.o 00:03:51.898 SYMLINK libspdk_scheduler_gscheduler.so 00:03:51.898 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:51.898 SYMLINK libspdk_scheduler_dynamic.so 00:03:51.898 LIB libspdk_keyring_file.a 00:03:51.898 LIB libspdk_blob_bdev.a 00:03:51.898 LIB libspdk_accel_dsa.a 00:03:51.898 SO libspdk_blob_bdev.so.11.0 00:03:51.898 SO libspdk_accel_dsa.so.5.0 00:03:51.898 SO libspdk_keyring_file.so.1.0 00:03:51.898 LIB libspdk_accel_ioat.a 00:03:51.898 LIB libspdk_accel_error.a 00:03:51.898 SYMLINK libspdk_blob_bdev.so 00:03:51.898 SO libspdk_accel_ioat.so.6.0 00:03:51.898 SYMLINK libspdk_keyring_file.so 00:03:51.898 SYMLINK libspdk_accel_dsa.so 00:03:51.898 SO libspdk_accel_error.so.2.0 00:03:51.898 SYMLINK libspdk_accel_ioat.so 00:03:51.898 CC module/accel/iaa/accel_iaa.o 
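Each SPDK component in this stretch is produced in three forms, visible as the recurring LIB/SO/SYMLINK triplets: a static archive, a versioned shared object, and an unversioned symlink to it. For one library the result on disk looks roughly like this (illustrative listing; the version suffix per component matches the SO lines above, e.g. libspdk_log.so.7.0):

    build/lib/libspdk_log.a                           # LIB: static archive
    build/lib/libspdk_log.so.7.0                      # SO: versioned shared object
    build/lib/libspdk_log.so -> libspdk_log.so.7.0    # SYMLINK: unversioned name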
00:03:51.898 CC module/accel/iaa/accel_iaa_rpc.o 00:03:52.157 SYMLINK libspdk_accel_error.so 00:03:52.157 CC module/bdev/delay/vbdev_delay.o 00:03:52.157 CC module/bdev/error/vbdev_error.o 00:03:52.157 CC module/bdev/malloc/bdev_malloc.o 00:03:52.157 CC module/bdev/lvol/vbdev_lvol.o 00:03:52.157 CC module/bdev/gpt/gpt.o 00:03:52.157 CC module/blobfs/bdev/blobfs_bdev.o 00:03:52.157 CC module/bdev/null/bdev_null.o 00:03:52.157 LIB libspdk_accel_iaa.a 00:03:52.416 SO libspdk_accel_iaa.so.3.0 00:03:52.416 CC module/bdev/nvme/bdev_nvme.o 00:03:52.416 SYMLINK libspdk_accel_iaa.so 00:03:52.416 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:52.416 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:52.416 CC module/bdev/gpt/vbdev_gpt.o 00:03:52.416 LIB libspdk_sock_posix.a 00:03:52.416 SO libspdk_sock_posix.so.6.0 00:03:52.675 CC module/bdev/error/vbdev_error_rpc.o 00:03:52.675 LIB libspdk_blobfs_bdev.a 00:03:52.675 SO libspdk_blobfs_bdev.so.6.0 00:03:52.675 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:52.675 SYMLINK libspdk_sock_posix.so 00:03:52.675 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:52.675 CC module/bdev/null/bdev_null_rpc.o 00:03:52.675 CC module/bdev/nvme/nvme_rpc.o 00:03:52.675 SYMLINK libspdk_blobfs_bdev.so 00:03:52.675 CC module/bdev/nvme/bdev_mdns_client.o 00:03:52.675 LIB libspdk_bdev_gpt.a 00:03:52.675 LIB libspdk_bdev_error.a 00:03:52.933 SO libspdk_bdev_gpt.so.6.0 00:03:52.933 SO libspdk_bdev_error.so.6.0 00:03:52.933 LIB libspdk_bdev_malloc.a 00:03:52.933 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:52.933 LIB libspdk_bdev_delay.a 00:03:52.933 SYMLINK libspdk_bdev_gpt.so 00:03:52.934 SO libspdk_bdev_malloc.so.6.0 00:03:52.934 SO libspdk_bdev_delay.so.6.0 00:03:52.934 SYMLINK libspdk_bdev_error.so 00:03:52.934 CC module/bdev/nvme/vbdev_opal.o 00:03:52.934 LIB libspdk_bdev_null.a 00:03:52.934 SYMLINK libspdk_bdev_delay.so 00:03:52.934 SYMLINK libspdk_bdev_malloc.so 00:03:52.934 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:52.934 SO libspdk_bdev_null.so.6.0 00:03:52.934 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:53.192 CC module/bdev/passthru/vbdev_passthru.o 00:03:53.192 SYMLINK libspdk_bdev_null.so 00:03:53.192 CC module/bdev/raid/bdev_raid.o 00:03:53.192 CC module/bdev/split/vbdev_split.o 00:03:53.193 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:53.193 CC module/bdev/raid/bdev_raid_rpc.o 00:03:53.193 CC module/bdev/raid/bdev_raid_sb.o 00:03:53.193 CC module/bdev/split/vbdev_split_rpc.o 00:03:53.193 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:53.451 LIB libspdk_bdev_lvol.a 00:03:53.451 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:53.451 SO libspdk_bdev_lvol.so.6.0 00:03:53.451 LIB libspdk_bdev_passthru.a 00:03:53.451 LIB libspdk_bdev_split.a 00:03:53.451 SYMLINK libspdk_bdev_lvol.so 00:03:53.451 CC module/bdev/raid/raid0.o 00:03:53.451 SO libspdk_bdev_split.so.6.0 00:03:53.451 SO libspdk_bdev_passthru.so.6.0 00:03:53.451 CC module/bdev/raid/raid1.o 00:03:53.451 SYMLINK libspdk_bdev_split.so 00:03:53.451 SYMLINK libspdk_bdev_passthru.so 00:03:53.451 CC module/bdev/raid/concat.o 00:03:53.710 CC module/bdev/xnvme/bdev_xnvme.o 00:03:53.710 LIB libspdk_bdev_zone_block.a 00:03:53.710 CC module/bdev/aio/bdev_aio.o 00:03:53.710 SO libspdk_bdev_zone_block.so.6.0 00:03:53.710 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:53.710 CC module/bdev/ftl/bdev_ftl.o 00:03:53.710 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:53.710 SYMLINK libspdk_bdev_zone_block.so 00:03:53.969 CC module/bdev/iscsi/bdev_iscsi.o 00:03:53.969 CC module/bdev/aio/bdev_aio_rpc.o 00:03:53.969 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:53.969 LIB libspdk_bdev_xnvme.a 00:03:53.969 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:53.969 SO libspdk_bdev_xnvme.so.3.0 00:03:53.969 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:53.969 LIB libspdk_bdev_aio.a 00:03:53.969 SYMLINK libspdk_bdev_xnvme.so 00:03:53.969 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:54.228 SO libspdk_bdev_aio.so.6.0 00:03:54.228 LIB libspdk_bdev_ftl.a 00:03:54.228 SO libspdk_bdev_ftl.so.6.0 00:03:54.228 SYMLINK libspdk_bdev_aio.so 00:03:54.228 SYMLINK libspdk_bdev_ftl.so 00:03:54.228 LIB libspdk_bdev_iscsi.a 00:03:54.228 SO libspdk_bdev_iscsi.so.6.0 00:03:54.487 SYMLINK libspdk_bdev_iscsi.so 00:03:54.487 LIB libspdk_bdev_raid.a 00:03:54.487 SO libspdk_bdev_raid.so.6.0 00:03:54.487 SYMLINK libspdk_bdev_raid.so 00:03:54.746 LIB libspdk_bdev_virtio.a 00:03:54.746 SO libspdk_bdev_virtio.so.6.0 00:03:54.746 SYMLINK libspdk_bdev_virtio.so 00:03:55.313 LIB libspdk_bdev_nvme.a 00:03:55.313 SO libspdk_bdev_nvme.so.7.0 00:03:55.313 SYMLINK libspdk_bdev_nvme.so 00:03:55.880 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:55.880 CC module/event/subsystems/sock/sock.o 00:03:55.880 CC module/event/subsystems/vmd/vmd.o 00:03:55.880 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:55.880 CC module/event/subsystems/iobuf/iobuf.o 00:03:55.880 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:55.880 CC module/event/subsystems/scheduler/scheduler.o 00:03:55.880 CC module/event/subsystems/keyring/keyring.o 00:03:56.139 LIB libspdk_event_vhost_blk.a 00:03:56.139 LIB libspdk_event_keyring.a 00:03:56.139 LIB libspdk_event_sock.a 00:03:56.139 SO libspdk_event_vhost_blk.so.3.0 00:03:56.139 SO libspdk_event_keyring.so.1.0 00:03:56.139 SO libspdk_event_sock.so.5.0 00:03:56.139 LIB libspdk_event_vmd.a 00:03:56.139 LIB libspdk_event_scheduler.a 00:03:56.139 LIB libspdk_event_iobuf.a 00:03:56.139 SO libspdk_event_vmd.so.6.0 00:03:56.139 SO libspdk_event_scheduler.so.4.0 00:03:56.139 SO libspdk_event_iobuf.so.3.0 00:03:56.139 SYMLINK libspdk_event_sock.so 00:03:56.139 SYMLINK libspdk_event_vhost_blk.so 00:03:56.139 SYMLINK libspdk_event_keyring.so 00:03:56.139 SYMLINK libspdk_event_scheduler.so 00:03:56.139 SYMLINK libspdk_event_vmd.so 00:03:56.139 SYMLINK libspdk_event_iobuf.so 00:03:56.398 CC module/event/subsystems/accel/accel.o 00:03:56.657 LIB libspdk_event_accel.a 00:03:56.657 SO libspdk_event_accel.so.6.0 00:03:56.657 SYMLINK libspdk_event_accel.so 00:03:56.920 CC module/event/subsystems/bdev/bdev.o 00:03:57.178 LIB libspdk_event_bdev.a 00:03:57.178 SO libspdk_event_bdev.so.6.0 00:03:57.436 SYMLINK libspdk_event_bdev.so 00:03:57.694 CC module/event/subsystems/scsi/scsi.o 00:03:57.694 CC module/event/subsystems/ublk/ublk.o 00:03:57.694 CC module/event/subsystems/nbd/nbd.o 00:03:57.694 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:57.694 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:57.694 LIB libspdk_event_ublk.a 00:03:57.694 LIB libspdk_event_nbd.a 00:03:57.694 LIB libspdk_event_scsi.a 00:03:57.694 SO libspdk_event_ublk.so.3.0 00:03:57.694 SO libspdk_event_scsi.so.6.0 00:03:57.694 SO libspdk_event_nbd.so.6.0 00:03:57.952 SYMLINK libspdk_event_ublk.so 00:03:57.952 SYMLINK libspdk_event_scsi.so 00:03:57.952 SYMLINK libspdk_event_nbd.so 00:03:57.952 LIB libspdk_event_nvmf.a 00:03:57.952 SO libspdk_event_nvmf.so.6.0 00:03:57.952 SYMLINK libspdk_event_nvmf.so 00:03:58.210 CC module/event/subsystems/iscsi/iscsi.o 00:03:58.210 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:58.210 LIB libspdk_event_vhost_scsi.a 00:03:58.469 
SO libspdk_event_vhost_scsi.so.3.0 00:03:58.469 LIB libspdk_event_iscsi.a 00:03:58.469 SO libspdk_event_iscsi.so.6.0 00:03:58.469 SYMLINK libspdk_event_vhost_scsi.so 00:03:58.469 SYMLINK libspdk_event_iscsi.so 00:03:58.726 SO libspdk.so.6.0 00:03:58.726 SYMLINK libspdk.so 00:03:58.985 CC app/trace_record/trace_record.o 00:03:58.985 CXX app/trace/trace.o 00:03:58.985 CC examples/vmd/lsvmd/lsvmd.o 00:03:58.985 CC examples/sock/hello_world/hello_sock.o 00:03:58.985 CC examples/ioat/perf/perf.o 00:03:58.985 CC examples/accel/perf/accel_perf.o 00:03:58.985 CC examples/bdev/hello_world/hello_bdev.o 00:03:58.985 CC examples/nvme/hello_world/hello_world.o 00:03:58.985 CC examples/blob/hello_world/hello_blob.o 00:03:58.985 CC test/accel/dif/dif.o 00:03:59.243 LINK lsvmd 00:03:59.243 LINK spdk_trace_record 00:03:59.244 LINK hello_bdev 00:03:59.244 LINK hello_world 00:03:59.244 LINK ioat_perf 00:03:59.244 LINK hello_blob 00:03:59.502 LINK hello_sock 00:03:59.502 LINK spdk_trace 00:03:59.502 CC examples/vmd/led/led.o 00:03:59.502 CC examples/ioat/verify/verify.o 00:03:59.760 LINK accel_perf 00:03:59.760 LINK dif 00:03:59.760 CC examples/nvme/reconnect/reconnect.o 00:03:59.760 CC examples/blob/cli/blobcli.o 00:03:59.760 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:59.760 LINK led 00:03:59.760 CC examples/bdev/bdevperf/bdevperf.o 00:03:59.760 LINK verify 00:04:00.017 CC examples/nvmf/nvmf/nvmf.o 00:04:00.017 CC app/nvmf_tgt/nvmf_main.o 00:04:00.017 LINK reconnect 00:04:00.017 LINK nvmf_tgt 00:04:00.017 CC examples/util/zipf/zipf.o 00:04:00.295 CC test/app/bdev_svc/bdev_svc.o 00:04:00.295 CC examples/thread/thread/thread_ex.o 00:04:00.295 CC examples/idxd/perf/perf.o 00:04:00.295 LINK nvmf 00:04:00.295 LINK blobcli 00:04:00.295 LINK zipf 00:04:00.295 LINK nvme_manage 00:04:00.295 LINK bdev_svc 00:04:00.562 CC app/iscsi_tgt/iscsi_tgt.o 00:04:00.562 CC test/bdev/bdevio/bdevio.o 00:04:00.562 LINK thread 00:04:00.562 CC examples/nvme/arbitration/arbitration.o 00:04:00.562 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:00.562 CC examples/nvme/hotplug/hotplug.o 00:04:00.562 LINK idxd_perf 00:04:00.821 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:00.821 LINK bdevperf 00:04:00.821 LINK iscsi_tgt 00:04:00.821 LINK interrupt_tgt 00:04:00.821 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:00.821 LINK cmb_copy 00:04:00.821 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:00.821 LINK hotplug 00:04:01.080 CC test/blobfs/mkfs/mkfs.o 00:04:01.080 LINK bdevio 00:04:01.080 CC test/app/histogram_perf/histogram_perf.o 00:04:01.080 LINK arbitration 00:04:01.338 CC test/app/jsoncat/jsoncat.o 00:04:01.338 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:01.338 CC test/app/stub/stub.o 00:04:01.338 LINK mkfs 00:04:01.338 CC app/spdk_tgt/spdk_tgt.o 00:04:01.338 LINK histogram_perf 00:04:01.338 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:01.338 LINK jsoncat 00:04:01.338 CC app/spdk_lspci/spdk_lspci.o 00:04:01.338 LINK stub 00:04:01.597 CC examples/nvme/abort/abort.o 00:04:01.597 LINK spdk_tgt 00:04:01.597 LINK spdk_lspci 00:04:01.597 TEST_HEADER include/spdk/accel.h 00:04:01.597 TEST_HEADER include/spdk/accel_module.h 00:04:01.597 LINK nvme_fuzz 00:04:01.597 TEST_HEADER include/spdk/assert.h 00:04:01.597 TEST_HEADER include/spdk/barrier.h 00:04:01.597 TEST_HEADER include/spdk/base64.h 00:04:01.597 TEST_HEADER include/spdk/bdev.h 00:04:01.597 TEST_HEADER include/spdk/bdev_module.h 00:04:01.597 TEST_HEADER include/spdk/bdev_zone.h 00:04:01.597 TEST_HEADER include/spdk/bit_array.h 00:04:01.597 TEST_HEADER include/spdk/bit_pool.h 
00:04:01.597 TEST_HEADER include/spdk/blob_bdev.h 00:04:01.597 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:01.597 TEST_HEADER include/spdk/blobfs.h 00:04:01.597 TEST_HEADER include/spdk/blob.h 00:04:01.597 TEST_HEADER include/spdk/conf.h 00:04:01.597 TEST_HEADER include/spdk/config.h 00:04:01.597 TEST_HEADER include/spdk/cpuset.h 00:04:01.597 TEST_HEADER include/spdk/crc16.h 00:04:01.597 TEST_HEADER include/spdk/crc32.h 00:04:01.597 TEST_HEADER include/spdk/crc64.h 00:04:01.597 TEST_HEADER include/spdk/dif.h 00:04:01.597 TEST_HEADER include/spdk/dma.h 00:04:01.597 TEST_HEADER include/spdk/endian.h 00:04:01.597 TEST_HEADER include/spdk/env_dpdk.h 00:04:01.597 TEST_HEADER include/spdk/env.h 00:04:01.597 TEST_HEADER include/spdk/event.h 00:04:01.597 TEST_HEADER include/spdk/fd_group.h 00:04:01.597 TEST_HEADER include/spdk/fd.h 00:04:01.597 TEST_HEADER include/spdk/file.h 00:04:01.597 TEST_HEADER include/spdk/ftl.h 00:04:01.597 TEST_HEADER include/spdk/gpt_spec.h 00:04:01.597 TEST_HEADER include/spdk/hexlify.h 00:04:01.597 TEST_HEADER include/spdk/histogram_data.h 00:04:01.597 TEST_HEADER include/spdk/idxd.h 00:04:01.597 TEST_HEADER include/spdk/idxd_spec.h 00:04:01.597 TEST_HEADER include/spdk/init.h 00:04:01.597 TEST_HEADER include/spdk/ioat.h 00:04:01.597 TEST_HEADER include/spdk/ioat_spec.h 00:04:01.597 TEST_HEADER include/spdk/iscsi_spec.h 00:04:01.597 TEST_HEADER include/spdk/json.h 00:04:01.597 TEST_HEADER include/spdk/jsonrpc.h 00:04:01.597 TEST_HEADER include/spdk/keyring.h 00:04:01.856 TEST_HEADER include/spdk/keyring_module.h 00:04:01.856 TEST_HEADER include/spdk/likely.h 00:04:01.856 TEST_HEADER include/spdk/log.h 00:04:01.856 TEST_HEADER include/spdk/lvol.h 00:04:01.856 CC test/env/vtophys/vtophys.o 00:04:01.856 TEST_HEADER include/spdk/memory.h 00:04:01.856 TEST_HEADER include/spdk/mmio.h 00:04:01.856 TEST_HEADER include/spdk/nbd.h 00:04:01.856 TEST_HEADER include/spdk/notify.h 00:04:01.856 TEST_HEADER include/spdk/nvme.h 00:04:01.856 CC test/dma/test_dma/test_dma.o 00:04:01.856 TEST_HEADER include/spdk/nvme_intel.h 00:04:01.856 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:01.856 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:01.856 TEST_HEADER include/spdk/nvme_spec.h 00:04:01.856 TEST_HEADER include/spdk/nvme_zns.h 00:04:01.856 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:01.856 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:01.856 TEST_HEADER include/spdk/nvmf.h 00:04:01.856 TEST_HEADER include/spdk/nvmf_spec.h 00:04:01.856 TEST_HEADER include/spdk/nvmf_transport.h 00:04:01.856 TEST_HEADER include/spdk/opal.h 00:04:01.856 TEST_HEADER include/spdk/opal_spec.h 00:04:01.856 TEST_HEADER include/spdk/pci_ids.h 00:04:01.856 TEST_HEADER include/spdk/pipe.h 00:04:01.856 TEST_HEADER include/spdk/queue.h 00:04:01.856 TEST_HEADER include/spdk/reduce.h 00:04:01.856 TEST_HEADER include/spdk/rpc.h 00:04:01.856 TEST_HEADER include/spdk/scheduler.h 00:04:01.856 TEST_HEADER include/spdk/scsi.h 00:04:01.856 TEST_HEADER include/spdk/scsi_spec.h 00:04:01.856 TEST_HEADER include/spdk/sock.h 00:04:01.856 TEST_HEADER include/spdk/stdinc.h 00:04:01.856 TEST_HEADER include/spdk/string.h 00:04:01.856 CC test/env/mem_callbacks/mem_callbacks.o 00:04:01.856 TEST_HEADER include/spdk/thread.h 00:04:01.856 TEST_HEADER include/spdk/trace.h 00:04:01.856 TEST_HEADER include/spdk/trace_parser.h 00:04:01.856 TEST_HEADER include/spdk/tree.h 00:04:01.856 TEST_HEADER include/spdk/ublk.h 00:04:01.856 TEST_HEADER include/spdk/util.h 00:04:01.856 TEST_HEADER include/spdk/uuid.h 00:04:01.856 TEST_HEADER 
include/spdk/version.h 00:04:01.856 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:01.856 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:01.856 TEST_HEADER include/spdk/vhost.h 00:04:01.856 TEST_HEADER include/spdk/vmd.h 00:04:01.856 TEST_HEADER include/spdk/xor.h 00:04:01.856 TEST_HEADER include/spdk/zipf.h 00:04:01.856 CXX test/cpp_headers/accel.o 00:04:01.856 LINK vhost_fuzz 00:04:01.856 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:01.856 CC test/env/memory/memory_ut.o 00:04:01.856 LINK vtophys 00:04:01.856 CC app/spdk_nvme_perf/perf.o 00:04:01.856 LINK abort 00:04:02.114 CXX test/cpp_headers/accel_module.o 00:04:02.114 LINK env_dpdk_post_init 00:04:02.114 CC app/spdk_nvme_identify/identify.o 00:04:02.114 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:02.114 CXX test/cpp_headers/assert.o 00:04:02.114 LINK test_dma 00:04:02.372 CC test/event/event_perf/event_perf.o 00:04:02.372 LINK pmr_persistence 00:04:02.372 CXX test/cpp_headers/barrier.o 00:04:02.372 LINK mem_callbacks 00:04:02.372 CC test/event/reactor/reactor.o 00:04:02.631 CXX test/cpp_headers/base64.o 00:04:02.631 LINK event_perf 00:04:02.631 LINK reactor 00:04:02.631 CC test/rpc_client/rpc_client_test.o 00:04:02.890 CXX test/cpp_headers/bdev.o 00:04:02.890 CC test/nvme/aer/aer.o 00:04:02.890 CXX test/cpp_headers/bdev_module.o 00:04:02.890 CC test/lvol/esnap/esnap.o 00:04:02.890 LINK memory_ut 00:04:02.890 LINK spdk_nvme_perf 00:04:03.149 CC test/event/reactor_perf/reactor_perf.o 00:04:03.149 LINK rpc_client_test 00:04:03.149 CXX test/cpp_headers/bdev_zone.o 00:04:03.149 CC test/event/app_repeat/app_repeat.o 00:04:03.149 CXX test/cpp_headers/bit_array.o 00:04:03.149 LINK aer 00:04:03.149 LINK reactor_perf 00:04:03.408 LINK iscsi_fuzz 00:04:03.408 CC test/event/scheduler/scheduler.o 00:04:03.408 CC test/env/pci/pci_ut.o 00:04:03.408 CXX test/cpp_headers/bit_pool.o 00:04:03.408 CC app/spdk_nvme_discover/discovery_aer.o 00:04:03.408 LINK app_repeat 00:04:03.408 LINK spdk_nvme_identify 00:04:03.408 CC test/nvme/sgl/sgl.o 00:04:03.668 CC test/nvme/reset/reset.o 00:04:03.668 CXX test/cpp_headers/blob_bdev.o 00:04:03.668 LINK scheduler 00:04:03.668 CXX test/cpp_headers/blobfs_bdev.o 00:04:03.668 LINK spdk_nvme_discover 00:04:03.925 CC test/thread/poller_perf/poller_perf.o 00:04:03.925 LINK sgl 00:04:03.925 CC app/spdk_top/spdk_top.o 00:04:03.925 CXX test/cpp_headers/blobfs.o 00:04:03.925 CC app/vhost/vhost.o 00:04:03.925 LINK pci_ut 00:04:03.925 CC app/spdk_dd/spdk_dd.o 00:04:03.925 LINK reset 00:04:03.925 LINK poller_perf 00:04:04.183 CXX test/cpp_headers/blob.o 00:04:04.183 CC app/fio/nvme/fio_plugin.o 00:04:04.183 CC test/nvme/e2edp/nvme_dp.o 00:04:04.183 LINK vhost 00:04:04.183 CXX test/cpp_headers/conf.o 00:04:04.183 CXX test/cpp_headers/config.o 00:04:04.183 CXX test/cpp_headers/cpuset.o 00:04:04.183 CXX test/cpp_headers/crc16.o 00:04:04.442 CC test/nvme/overhead/overhead.o 00:04:04.442 LINK spdk_dd 00:04:04.442 CXX test/cpp_headers/crc32.o 00:04:04.442 CXX test/cpp_headers/crc64.o 00:04:04.442 CC test/nvme/err_injection/err_injection.o 00:04:04.442 LINK nvme_dp 00:04:04.442 CC app/fio/bdev/fio_plugin.o 00:04:04.702 CXX test/cpp_headers/dif.o 00:04:04.702 CXX test/cpp_headers/dma.o 00:04:04.702 CXX test/cpp_headers/endian.o 00:04:04.702 LINK err_injection 00:04:04.702 LINK overhead 00:04:04.960 CC test/nvme/startup/startup.o 00:04:04.960 LINK spdk_nvme 00:04:04.960 CXX test/cpp_headers/env_dpdk.o 00:04:04.960 CXX test/cpp_headers/env.o 00:04:04.960 CC test/nvme/reserve/reserve.o 00:04:04.960 CXX 
test/cpp_headers/event.o 00:04:04.960 CC test/nvme/simple_copy/simple_copy.o 00:04:04.960 LINK startup 00:04:04.960 CC test/nvme/connect_stress/connect_stress.o 00:04:05.220 CC test/nvme/boot_partition/boot_partition.o 00:04:05.220 CC test/nvme/compliance/nvme_compliance.o 00:04:05.220 LINK spdk_top 00:04:05.220 CXX test/cpp_headers/fd_group.o 00:04:05.220 LINK reserve 00:04:05.220 LINK spdk_bdev 00:04:05.220 LINK connect_stress 00:04:05.220 LINK boot_partition 00:04:05.220 LINK simple_copy 00:04:05.220 CC test/nvme/fused_ordering/fused_ordering.o 00:04:05.479 CXX test/cpp_headers/fd.o 00:04:05.479 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:05.479 CC test/nvme/fdp/fdp.o 00:04:05.479 CXX test/cpp_headers/file.o 00:04:05.479 CC test/nvme/cuse/cuse.o 00:04:05.479 CXX test/cpp_headers/ftl.o 00:04:05.479 CXX test/cpp_headers/gpt_spec.o 00:04:05.479 CXX test/cpp_headers/hexlify.o 00:04:05.479 LINK fused_ordering 00:04:05.738 CXX test/cpp_headers/histogram_data.o 00:04:05.738 LINK doorbell_aers 00:04:05.738 CXX test/cpp_headers/idxd.o 00:04:05.738 CXX test/cpp_headers/idxd_spec.o 00:04:05.738 LINK nvme_compliance 00:04:05.738 CXX test/cpp_headers/init.o 00:04:05.738 CXX test/cpp_headers/ioat.o 00:04:05.738 CXX test/cpp_headers/ioat_spec.o 00:04:05.738 CXX test/cpp_headers/iscsi_spec.o 00:04:05.996 LINK fdp 00:04:05.996 CXX test/cpp_headers/json.o 00:04:05.996 CXX test/cpp_headers/jsonrpc.o 00:04:05.996 CXX test/cpp_headers/keyring.o 00:04:05.996 CXX test/cpp_headers/keyring_module.o 00:04:05.996 CXX test/cpp_headers/likely.o 00:04:05.996 CXX test/cpp_headers/log.o 00:04:05.996 CXX test/cpp_headers/lvol.o 00:04:05.996 CXX test/cpp_headers/memory.o 00:04:05.996 CXX test/cpp_headers/mmio.o 00:04:06.255 CXX test/cpp_headers/nbd.o 00:04:06.255 CXX test/cpp_headers/notify.o 00:04:06.255 CXX test/cpp_headers/nvme.o 00:04:06.255 CXX test/cpp_headers/nvme_intel.o 00:04:06.255 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:06.255 CXX test/cpp_headers/nvme_ocssd.o 00:04:06.255 CXX test/cpp_headers/nvme_spec.o 00:04:06.255 CXX test/cpp_headers/nvme_zns.o 00:04:06.255 CXX test/cpp_headers/nvmf_cmd.o 00:04:06.255 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:06.255 CXX test/cpp_headers/nvmf.o 00:04:06.624 CXX test/cpp_headers/nvmf_spec.o 00:04:06.624 CXX test/cpp_headers/nvmf_transport.o 00:04:06.624 CXX test/cpp_headers/opal.o 00:04:06.624 CXX test/cpp_headers/opal_spec.o 00:04:06.624 CXX test/cpp_headers/pci_ids.o 00:04:06.624 CXX test/cpp_headers/pipe.o 00:04:06.624 CXX test/cpp_headers/queue.o 00:04:06.624 CXX test/cpp_headers/reduce.o 00:04:06.624 CXX test/cpp_headers/rpc.o 00:04:06.624 CXX test/cpp_headers/scheduler.o 00:04:06.624 CXX test/cpp_headers/scsi.o 00:04:06.624 CXX test/cpp_headers/scsi_spec.o 00:04:06.624 CXX test/cpp_headers/sock.o 00:04:06.624 CXX test/cpp_headers/stdinc.o 00:04:06.624 CXX test/cpp_headers/string.o 00:04:06.883 LINK cuse 00:04:06.883 CXX test/cpp_headers/thread.o 00:04:06.883 CXX test/cpp_headers/trace.o 00:04:06.883 CXX test/cpp_headers/trace_parser.o 00:04:06.883 CXX test/cpp_headers/tree.o 00:04:06.883 CXX test/cpp_headers/ublk.o 00:04:06.883 CXX test/cpp_headers/util.o 00:04:06.883 CXX test/cpp_headers/uuid.o 00:04:06.883 CXX test/cpp_headers/version.o 00:04:06.883 CXX test/cpp_headers/vfio_user_pci.o 00:04:06.883 CXX test/cpp_headers/vfio_user_spec.o 00:04:06.883 CXX test/cpp_headers/vhost.o 00:04:06.883 CXX test/cpp_headers/vmd.o 00:04:06.883 CXX test/cpp_headers/xor.o 00:04:06.883 CXX test/cpp_headers/zipf.o 00:04:09.414 LINK esnap 00:04:09.982 00:04:09.982 real 
1m17.163s 00:04:09.982 user 7m34.370s 00:04:09.982 sys 1m41.714s 00:04:09.982 17:56:02 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:09.982 ************************************ 00:04:09.982 END TEST make 00:04:09.982 ************************************ 00:04:09.982 17:56:02 make -- common/autotest_common.sh@10 -- $ set +x 00:04:09.982 17:56:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:09.982 17:56:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:09.982 17:56:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:09.982 17:56:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.982 17:56:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:09.982 17:56:02 -- pm/common@44 -- $ pid=5167 00:04:09.982 17:56:02 -- pm/common@50 -- $ kill -TERM 5167 00:04:09.982 17:56:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.982 17:56:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:09.982 17:56:02 -- pm/common@44 -- $ pid=5169 00:04:09.982 17:56:02 -- pm/common@50 -- $ kill -TERM 5169 00:04:09.982 17:56:02 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:09.983 17:56:02 -- nvmf/common.sh@7 -- # uname -s 00:04:09.983 17:56:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:09.983 17:56:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:09.983 17:56:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:09.983 17:56:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:09.983 17:56:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:09.983 17:56:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:09.983 17:56:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:09.983 17:56:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:09.983 17:56:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:09.983 17:56:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:09.983 17:56:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d392595-b32d-4fb6-a9ae-a7286ece9269 00:04:09.983 17:56:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d392595-b32d-4fb6-a9ae-a7286ece9269 00:04:09.983 17:56:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:09.983 17:56:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:09.983 17:56:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:09.983 17:56:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:09.983 17:56:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:09.983 17:56:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:09.983 17:56:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.983 17:56:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.983 17:56:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.983 17:56:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.983 17:56:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.983 17:56:02 -- paths/export.sh@5 -- # export PATH 00:04:09.983 17:56:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.983 17:56:02 -- nvmf/common.sh@47 -- # : 0 00:04:09.983 17:56:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:09.983 17:56:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:09.983 17:56:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:09.983 17:56:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:09.983 17:56:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:09.983 17:56:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:09.983 17:56:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:09.983 17:56:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:09.983 17:56:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:09.983 17:56:02 -- spdk/autotest.sh@32 -- # uname -s 00:04:09.983 17:56:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:09.983 17:56:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:09.983 17:56:02 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:10.242 17:56:02 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:10.242 17:56:02 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:10.242 17:56:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:10.242 17:56:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:10.242 17:56:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:10.242 17:56:02 -- spdk/autotest.sh@48 -- # udevadm_pid=53072 00:04:10.242 17:56:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:10.242 17:56:02 -- pm/common@17 -- # local monitor 00:04:10.242 17:56:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.242 17:56:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:10.242 17:56:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.242 17:56:02 -- pm/common@21 -- # date +%s 00:04:10.242 17:56:02 -- pm/common@25 -- # sleep 1 00:04:10.242 17:56:02 -- pm/common@21 -- # date +%s 00:04:10.242 17:56:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715795762 00:04:10.242 17:56:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715795762 00:04:10.242 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715795762_collect-cpu-load.pm.log 00:04:10.242 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715795762_collect-vmstat.pm.log 00:04:11.178 17:56:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:11.178 17:56:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:11.178 17:56:03 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:11.178 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:04:11.178 17:56:03 -- spdk/autotest.sh@59 -- # create_test_list 00:04:11.178 17:56:03 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:11.178 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:04:11.178 17:56:03 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:11.178 17:56:03 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:11.178 17:56:03 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:11.178 17:56:03 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:11.178 17:56:03 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:11.178 17:56:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:11.178 17:56:03 -- common/autotest_common.sh@1451 -- # uname 00:04:11.178 17:56:03 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:11.178 17:56:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:11.178 17:56:03 -- common/autotest_common.sh@1471 -- # uname 00:04:11.178 17:56:03 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:11.178 17:56:03 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:11.178 17:56:03 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:11.178 17:56:03 -- spdk/autotest.sh@72 -- # hash lcov 00:04:11.178 17:56:03 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:11.178 17:56:03 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:11.178 --rc lcov_branch_coverage=1 00:04:11.178 --rc lcov_function_coverage=1 00:04:11.178 --rc genhtml_branch_coverage=1 00:04:11.178 --rc genhtml_function_coverage=1 00:04:11.178 --rc genhtml_legend=1 00:04:11.178 --rc geninfo_all_blocks=1 00:04:11.178 ' 00:04:11.178 17:56:03 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:11.178 --rc lcov_branch_coverage=1 00:04:11.178 --rc lcov_function_coverage=1 00:04:11.178 --rc genhtml_branch_coverage=1 00:04:11.178 --rc genhtml_function_coverage=1 00:04:11.178 --rc genhtml_legend=1 00:04:11.178 --rc geninfo_all_blocks=1 00:04:11.178 ' 00:04:11.178 17:56:03 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:11.178 --rc lcov_branch_coverage=1 00:04:11.178 --rc lcov_function_coverage=1 00:04:11.178 --rc genhtml_branch_coverage=1 00:04:11.178 --rc genhtml_function_coverage=1 00:04:11.178 --rc genhtml_legend=1 00:04:11.178 --rc geninfo_all_blocks=1 00:04:11.178 --no-external' 00:04:11.178 17:56:03 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:11.178 --rc lcov_branch_coverage=1 00:04:11.178 --rc lcov_function_coverage=1 00:04:11.178 --rc genhtml_branch_coverage=1 00:04:11.178 --rc genhtml_function_coverage=1 00:04:11.178 --rc genhtml_legend=1 00:04:11.178 --rc geninfo_all_blocks=1 00:04:11.178 --no-external' 00:04:11.178 17:56:03 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:11.436 lcov: LCOV version 
1.14 00:04:11.436 17:56:03 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:21.405 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:21.405 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:21.405 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:21.405 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:21.405 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:21.405 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:28.017 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:28.018 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions 
found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:43.016 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:43.016 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:43.016 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:45.550 17:56:37 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:45.550 17:56:37 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:45.550 17:56:37 -- common/autotest_common.sh@10 -- # set +x 00:04:45.550 17:56:37 -- spdk/autotest.sh@91 -- # rm -f 00:04:45.550 17:56:37 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:45.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.116 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:46.116 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:46.116 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:46.116 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:46.116 17:56:38 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:46.116 17:56:38 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:46.116 17:56:38 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:46.116 17:56:38 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:46.116 17:56:38 -- common/autotest_common.sh@1668 -- # for 
nvme in /sys/block/nvme* 00:04:46.116 17:56:38 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:46.116 17:56:38 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:46.116 17:56:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:46.116 17:56:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:46.116 17:56:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:46.116 17:56:38 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:46.116 17:56:38 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:46.116 17:56:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:46.116 17:56:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:46.116 17:56:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:46.116 17:56:38 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:04:46.116 17:56:38 -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:04:46.116 17:56:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:46.116 17:56:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:46.116 17:56:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:46.116 17:56:38 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:04:46.116 17:56:38 -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:04:46.116 17:56:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:46.116 17:56:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:46.116 17:56:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:46.116 17:56:38 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:04:46.116 17:56:38 -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:04:46.116 17:56:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:46.116 17:56:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:46.116 17:56:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:46.116 17:56:38 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:04:46.116 17:56:38 -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:04:46.116 17:56:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:46.116 17:56:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:46.116 17:56:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:46.116 17:56:38 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:04:46.116 17:56:38 -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:04:46.116 17:56:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:46.116 17:56:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:46.116 17:56:38 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:46.116 17:56:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.116 17:56:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:46.116 17:56:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:46.116 17:56:38 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:46.116 17:56:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:46.116 No valid GPT data, bailing 00:04:46.116 17:56:38 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:46.116 17:56:38 -- scripts/common.sh@391 -- # pt= 00:04:46.116 17:56:38 -- scripts/common.sh@392 -- # return 1 00:04:46.116 17:56:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:46.116 1+0 records in 00:04:46.116 1+0 records out 00:04:46.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113876 s, 92.1 MB/s 00:04:46.116 17:56:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.116 17:56:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:46.116 17:56:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:46.116 17:56:38 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:46.116 17:56:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:46.375 No valid GPT data, bailing 00:04:46.375 17:56:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:46.375 17:56:38 -- scripts/common.sh@391 -- # pt= 00:04:46.375 17:56:38 -- scripts/common.sh@392 -- # return 1 00:04:46.375 17:56:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:46.375 1+0 records in 00:04:46.375 1+0 records out 00:04:46.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379219 s, 277 MB/s 00:04:46.375 17:56:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.375 17:56:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:46.375 17:56:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:04:46.375 17:56:38 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:04:46.375 17:56:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:46.375 No valid GPT data, bailing 00:04:46.375 17:56:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:46.375 17:56:38 -- scripts/common.sh@391 -- # pt= 00:04:46.375 17:56:38 -- scripts/common.sh@392 -- # return 1 00:04:46.375 17:56:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:46.375 1+0 records in 00:04:46.375 1+0 records out 00:04:46.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00353188 s, 297 MB/s 00:04:46.375 17:56:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.375 17:56:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:46.375 17:56:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:04:46.375 17:56:38 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:04:46.375 17:56:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:46.375 No valid GPT data, bailing 00:04:46.375 17:56:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:46.375 17:56:38 -- scripts/common.sh@391 -- # pt= 00:04:46.375 17:56:38 -- scripts/common.sh@392 -- # return 1 00:04:46.375 17:56:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:46.375 1+0 records in 00:04:46.375 1+0 records out 00:04:46.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00369443 s, 284 MB/s 00:04:46.375 17:56:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.375 17:56:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:46.375 17:56:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:04:46.375 17:56:38 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:04:46.375 17:56:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:46.633 No valid GPT data, bailing 00:04:46.633 
17:56:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:46.633 17:56:38 -- scripts/common.sh@391 -- # pt= 00:04:46.633 17:56:38 -- scripts/common.sh@392 -- # return 1 00:04:46.633 17:56:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:46.633 1+0 records in 00:04:46.633 1+0 records out 00:04:46.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00301218 s, 348 MB/s 00:04:46.633 17:56:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.633 17:56:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:46.633 17:56:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:04:46.633 17:56:38 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:04:46.633 17:56:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:46.633 No valid GPT data, bailing 00:04:46.633 17:56:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:46.633 17:56:39 -- scripts/common.sh@391 -- # pt= 00:04:46.633 17:56:39 -- scripts/common.sh@392 -- # return 1 00:04:46.633 17:56:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:46.633 1+0 records in 00:04:46.633 1+0 records out 00:04:46.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00337987 s, 310 MB/s 00:04:46.633 17:56:39 -- spdk/autotest.sh@118 -- # sync 00:04:46.633 17:56:39 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:46.633 17:56:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:46.633 17:56:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:48.006 17:56:40 -- spdk/autotest.sh@124 -- # uname -s 00:04:48.006 17:56:40 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:48.006 17:56:40 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:48.006 17:56:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:48.006 17:56:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.006 17:56:40 -- common/autotest_common.sh@10 -- # set +x 00:04:48.006 ************************************ 00:04:48.006 START TEST setup.sh 00:04:48.006 ************************************ 00:04:48.006 17:56:40 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:48.006 * Looking for test storage... 00:04:48.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:48.006 17:56:40 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:48.006 17:56:40 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:48.006 17:56:40 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:48.006 17:56:40 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:48.006 17:56:40 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.006 17:56:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:48.006 ************************************ 00:04:48.006 START TEST acl 00:04:48.006 ************************************ 00:04:48.006 17:56:40 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:48.265 * Looking for test storage... 
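The pre-cleanup pass traced above probes each idle NVMe namespace for a partition-table signature and, when none is reported, zeroes the first MiB before the tests claim the device. A minimal sketch of that probe-and-wipe step, assuming a $dev variable naming an unclaimed namespace (the real check in scripts/common.sh also consults spdk-gpt.py); it is destructive, so shown for illustration only:

    # Sketch of the probe-and-wipe pattern seen in the trace (assumption:
    # $dev is an idle namespace such as /dev/nvme0n1; wiping is destructive).
    if [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
        # blkid reported no partition-table type ("No valid GPT data, bailing"
        # above): clear any stale metadata from the first MiB
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi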
00:04:48.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:48.265 17:56:40 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 
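Each is_block_zoned call in the loop above resolves by reading the namespace's sysfs queue attribute; on this rig every namespace reports "none", so nothing is excluded. A minimal sketch of that filter, assuming the standard sysfs layout the trace is walking:

    # Sketch of the zoned-namespace filter traced above: a device counts as
    # zoned only when its queue/zoned attribute reads something other
    # than "none".
    for nvme in /sys/block/nvme*; do
        [[ -e "$nvme/queue/zoned" ]] || continue
        [[ "$(cat "$nvme/queue/zoned")" != none ]] && echo "zoned: ${nvme##*/}"
    done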
00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:48.265 17:56:40 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:48.265 17:56:40 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:48.265 17:56:40 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:48.265 17:56:40 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:48.265 17:56:40 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:48.265 17:56:40 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:48.265 17:56:40 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.265 17:56:40 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.211 17:56:41 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:49.211 17:56:41 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:49.211 17:56:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.211 17:56:41 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:49.211 17:56:41 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.211 17:56:41 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:49.791 17:56:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:49.791 17:56:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:49.791 17:56:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.357 Hugepages 00:04:50.357 node hugesize free / total 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.357 00:04:50.357 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:50.357 17:56:42 setup.sh.acl -- 
setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:50.357 17:56:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:50.614 17:56:42 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:50.614 17:56:42 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.614 17:56:42 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.614 17:56:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:50.614 ************************************ 00:04:50.614 START TEST denied 00:04:50.614 ************************************ 00:04:50.614 17:56:42 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:50.614 17:56:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:50.614 17:56:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:50.614 17:56:42 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:50.614 17:56:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.614 17:56:42 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:51.986 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:51.986 17:56:44 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:51.986 17:56:44 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:51.986 17:56:44 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:51.986 17:56:44 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:51.986 17:56:44 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:51.986 17:56:44 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:51.986 17:56:44 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:51.986 17:56:44 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:51.986 17:56:44 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.986 17:56:44 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:58.551 00:04:58.551 real 0m7.194s 00:04:58.551 user 0m0.841s 00:04:58.551 sys 0m1.398s 00:04:58.551 17:56:50 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:58.551 17:56:50 setup.sh.acl.denied -- 
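The collect_setup_devs records above feed 'setup.sh status' into read -r _ dev _ _ _ driver _: each row of the status table is split on whitespace and only column 2 (the BDF) and column 6 (the driver) are kept, while non-PCI rows, controllers not bound to nvme, and anything in PCI_BLOCKED are skipped, which is how the four controllers end up in devs. Approximately:

    devs=()
    declare -A drivers=()
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue        # keep only BDF rows
        [[ $driver == nvme ]] || continue        # only nvme-bound controllers
        [[ ${PCI_BLOCKED:-} == *"$dev"* ]] && continue
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)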
common/autotest_common.sh@10 -- # set +x 00:04:58.551 ************************************ 00:04:58.551 END TEST denied 00:04:58.551 ************************************ 00:04:58.551 17:56:50 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:58.551 17:56:50 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.551 17:56:50 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.551 17:56:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:58.551 ************************************ 00:04:58.551 START TEST allowed 00:04:58.551 ************************************ 00:04:58.551 17:56:50 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:58.551 17:56:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:58.551 17:56:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:58.551 17:56:50 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.551 17:56:50 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:58.551 17:56:50 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:59.120 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.120 17:56:51 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.097 00:05:00.097 real 0m2.321s 00:05:00.097 user 0m1.061s 00:05:00.097 sys 0m1.259s 00:05:00.097 17:56:52 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.097 ************************************ 00:05:00.097 
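The denied and allowed tests above exercise setup.sh's PCI filters from both sides: with PCI_BLOCKED=' 0000:00:10.0' the config pass must print 'Skipping denied controller at 0000:00:10.0' and leave that controller on the nvme driver, while with PCI_ALLOWED=0000:00:10.0 only that controller is rebound (nvme -> uio_pci_generic) and 11.0/12.0/13.0 must stay on nvme. The verify step in acl.sh reduces to a readlink comparison along these lines:

    verify_driver() {
        local bdf=$1 want=$2 driver
        [[ -e /sys/bus/pci/devices/$bdf ]] || return 1
        driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
        [[ ${driver##*/} == "$want" ]]    # compare only the driver name
    }
    # e.g. after 'PCI_BLOCKED=" 0000:00:10.0" setup.sh config':
    verify_driver 0000:00:10.0 nvme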
17:56:52 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:00.097 END TEST allowed 00:05:00.097 ************************************ 00:05:00.097 00:05:00.097 real 0m12.123s 00:05:00.097 user 0m3.086s 00:05:00.097 sys 0m4.076s 00:05:00.097 17:56:52 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.097 17:56:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:00.097 ************************************ 00:05:00.097 END TEST acl 00:05:00.097 ************************************ 00:05:00.358 17:56:52 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:00.358 17:56:52 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.358 17:56:52 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.358 17:56:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:00.358 ************************************ 00:05:00.358 START TEST hugepages 00:05:00.358 ************************************ 00:05:00.358 17:56:52 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:00.358 * Looking for test storage... 00:05:00.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5409384 kB' 'MemAvailable: 7398828 kB' 'Buffers: 2436 kB' 'Cached: 2201132 kB' 'SwapCached: 0 kB' 'Active: 838200 kB' 'Inactive: 1466648 kB' 'Active(anon): 111792 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 103180 kB' 'Mapped: 48680 kB' 'Shmem: 10512 kB' 'KReclaimable: 66656 kB' 'Slab: 147816 kB' 'SReclaimable: 66656 kB' 'SUnreclaim: 81160 kB' 'KernelStack: 6288 kB' 'PageTables: 4120 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 326452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:05:00.358 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[xtrace condensed: the same IFS=': ' / read / compare / continue records repeat for every remaining /proc/meminfo key, Inactive(anon) through HugePages_Total]
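The span condensed above is get_meminfo from setup/common.sh resolving Hugepagesize: it slurps /proc/meminfo with mapfile (falling back to a node's own meminfo file when a node is given), strips any 'Node <n>' prefix with an extglob substitution so both file formats parse identically, then splits each record on ': ' and echoes the value of the first matching key, 2048 here. A rough reconstruction:

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node queries read the node's own meminfo file instead.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the 'Node N ' column prefix
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo Hugepagesize    # -> 2048 on this runner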
setup/common.sh@32 -- # continue 00:05:00.359 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.359 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.359 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.359 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.359 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.359 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.359 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.359 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.359 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:00.360 17:56:52 setup.sh.hugepages -- 
setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:00.360 17:56:52 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:00.360 17:56:52 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.360 17:56:52 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.360 17:56:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:00.360 ************************************ 00:05:00.360 START TEST default_setup 00:05:00.360 ************************************ 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.360 17:56:52 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.497 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.497 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.497 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.763 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # 
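Before default_setup runs, the clear_hp records above zero every per-node preallocation and export CLEAR_HUGE=yes so setup.sh also empties hugetlbfs. xtrace does not print redirections, so the target of each 'echo 0' is inferred, and the real loop keys off the nodes_sys map that get_nodes filled just above (a single node 0 here); a sketch under those assumptions:

    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # assumed redirect target
            done
        done
        export CLEAR_HUGE=yes                 # setup.sh then cleans hugetlbfs
    }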
verify_nr_hugepages 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7529996 kB' 'MemAvailable: 9519188 kB' 'Buffers: 2436 kB' 'Cached: 2201132 kB' 'SwapCached: 0 kB' 'Active: 856512 kB' 'Inactive: 1466672 kB' 'Active(anon): 130104 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 121168 kB' 'Mapped: 48860 kB' 'Shmem: 10476 kB' 'KReclaimable: 66100 kB' 'Slab: 146840 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80740 kB' 'KernelStack: 6256 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.763 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.763 
17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
[xtrace condensed: identical compare-and-continue records for each /proc/meminfo key, MemFree through Committed_AS]
00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.764 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.765 17:56:54 
00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7529748 kB' 'MemAvailable: 9518944 kB' 'Buffers: 2436 kB' 'Cached: 2201128 kB' 'SwapCached: 0 kB' 'Active: 856048 kB' 'Inactive: 1466676 kB' 'Active(anon): 129640 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 120704 kB' 'Mapped: 48740 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146828 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80728 kB' 'KernelStack: 6240 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:05:01.765 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / setup/common.sh@32 -- # continue [this compare/continue pair, with the interleaved common.sh@31 IFS=': ' / read -r var val _ loop-control entries, repeats for every key preceding HugePages_Surp in the snapshot above, MemTotal through HugePages_Rsvd]
00:05:01.767 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.767 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:01.767 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:01.767 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:05:01.767 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:01.767 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@17..@31 [function-entry trace identical to the HugePages_Surp call above, now with local get=HugePages_Rsvd]
00:05:01.767 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7529748 kB' 'MemAvailable: 9518944 kB' 'Buffers: 2436 kB' 'Cached: 2201128 kB' 'SwapCached: 0 kB' 'Active: 856148 kB' 'Inactive: 1466676 kB' 'Active(anon): 129740 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 120892 kB' 'Mapped: 49000 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146836 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80736 kB' 'KernelStack: 6256 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 347788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:05:01.767 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / setup/common.sh@32 -- # continue [repeats for every key preceding HugePages_Rsvd in the snapshot above, MemTotal through HugePages_Free]
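Each snapshot above is a single printf of the whole table, so every get_meminfo call rereads all of /proc/meminfo and discards everything before the key it wants. To spot-check the same counters by hand on a test box, these are the equivalent one-line queries (not commands the harness itself runs):

  awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo       # prints 0 on this host
  grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo   # the whole hugepage block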
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@17..@31 [function-entry trace identical to the calls above, now with local get=HugePages_Total]
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7529748 kB' 'MemAvailable: 9518952 kB' 'Buffers: 2436 kB' 'Cached: 2201128 kB' 'SwapCached: 0 kB' 'Active: 855956 kB' 'Inactive: 1466684 kB' 'Active(anon): 129548 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 120640 kB' 'Mapped: 48740 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146828 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80728 kB' 'KernelStack: 6160 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
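At hugepages.sh@97-@110 the test has collected anon, surp, and resv, echoed them, and now asserts that the configured pool adds up before reading HugePages_Total back. A condensed sketch of that accounting with this run's values in the comments; it reuses the get_meminfo sketch above, 1024 is the expected default pool for this test, and the wrapper name is illustrative since the enclosing function is not visible in this excerpt:

  verify_default_pool() {                      # illustrative name, not from the log
      local expected=1024                      # default nr_hugepages under test
      local nr_hugepages=1024                  # echoed above as nr_hugepages=1024
      local anon surp resv
      anon=$(get_meminfo AnonHugePages)        # 0 in this run (hugepages.sh@97)
      surp=$(get_meminfo HugePages_Surp)       # 0 (hugepages.sh@99)
      resv=$(get_meminfo HugePages_Rsvd)       # 0 (hugepages.sh@100)
      (( expected == nr_hugepages + surp + resv ))  # hugepages.sh@107
      (( expected == nr_hugepages ))                # hugepages.sh@109
  }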
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:01.770 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / setup/common.sh@32 -- # continue [this compare/continue pair, with the interleaved common.sh@31 IFS=': ' / read -r var val _ loop-control entries, repeats for each key from MemTotal through CommitLimit]
00:05:01.771 17:56:54 setup.sh.hugepages.default_setup --
setup/common.sh@31 -- # IFS=': ' 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.771 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.772 17:56:54 
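A note on the wall of xtrace condensed above: it comes from setup/common.sh's get_meminfo, which scans a meminfo file line by line until the requested key matches, and bash xtrace renders the quoted, literal right-hand side of [[ ... ]] with every character backslash-escaped, which is why the log is full of patterns like \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l. A minimal sketch of the technique, reconstructed from the trace records (not the verbatim SPDK helper, which buffers the file with mapfile first, as the "mapfile -t mem" records show):

get_meminfo() {   # usage: get_meminfo <Key> [node]  (sketch)
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # when a node id is given, prefer the per-node sysfs view if it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        # every key that is not the requested one produces a "continue" trace line
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

Here the call printed 1024 for HugePages_Total, and hugepages.sh@110 immediately verifies that all 1024 configured pages are accounted for as requested, surplus, or reserved before get_nodes tallies the per-node view.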
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7529748 kB' 'MemUsed: 4712232 kB' 'SwapCached: 0 kB' 'Active: 856044 kB' 'Inactive: 1466684 kB' 'Active(anon): 129636 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'FilePages: 2203564 kB' 'Mapped: 48740 kB' 'AnonPages: 120736 kB' 'Shmem: 10472 kB' 'KernelStack: 6192 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66100 kB' 'Slab: 146820 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:01.772 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [xtrace condensed: each node0 meminfo key from MemTotal through HugePages_Free was matched against HugePages_Surp and skipped with continue]
00:05:01.773 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.773 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:01.773 17:56:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:01.773 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
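The HugePages_Surp lookup above ran against /sys/devices/system/node/node0/meminfo rather than /proc/meminfo. Lines in the per-node sysfs file carry a "Node 0 " prefix that /proc/meminfo lines lack, so common.sh@29 strips it with an extglob pattern before the key scan; a self-contained demo, with sample lines copied from the node0 snapshot above:

shopt -s extglob   # required for the +([0-9]) extended pattern below
# two lines in the per-node sysfs format (values taken from the snapshot)
mapfile -t mem < <(printf '%s\n' 'Node 0 MemTotal: 12241980 kB' 'Node 0 HugePages_Surp: 0')
mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node <N> " from every element
printf '%s\n' "${mem[@]}"
# -> MemTotal: 12241980 kB
# -> HugePages_Surp: 0

With the surplus count at 0, nodes_test[0] stays at 1024 pages, matching the "node0=1024 expecting 1024" verdict printed just below.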
setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.773 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.773 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.773 node0=1024 expecting 1024 00:05:01.773 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:01.773 17:56:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:01.773 00:05:01.773 real 0m1.447s 00:05:01.773 user 0m0.613s 00:05:01.773 sys 0m0.801s 00:05:01.773 17:56:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.773 17:56:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:01.773 ************************************ 00:05:01.773 END TEST default_setup 00:05:01.773 ************************************ 00:05:02.033 17:56:54 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:02.033 17:56:54 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.033 17:56:54 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.033 17:56:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.033 ************************************ 00:05:02.033 START TEST per_node_1G_alloc 00:05:02.033 ************************************ 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:02.033 17:56:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.033 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.293 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.557 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.557 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.557 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.557 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.557 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:02.557 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:02.557 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:02.557 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.557 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
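Before the xtrace resumes, the per_node_1G_alloc parameters decompose cleanly: get_test_nr_hugepages was called with size 1048576 (kB, i.e. 1 GiB) for node 0, the meminfo snapshots report Hugepagesize: 2048 kB, and 1048576 / 2048 = 512, exactly the nr_hugepages=512 that was exported as NRHUGE=512 HUGENODE=0 to scripts/setup.sh above. The same conversion as a runnable one-liner (illustrative, not SPDK's code):

# recompute nr_hugepages=512 from the traced request size
size_kb=1048576                                                 # 1 GiB request, in kB
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this VM
echo $(( size_kb / hugepage_kb ))                               # -> 512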
00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8572932 kB' 'MemAvailable: 10562136 kB' 'Buffers: 2436 kB' 'Cached: 2201128 kB' 'SwapCached: 0 kB' 'Active: 856304 kB' 'Inactive: 1466684 kB' 'Active(anon): 129896 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 121256 kB' 'Mapped: 48868 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146864 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80764 kB' 'KernelStack: 6232 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:05:02.558 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [xtrace condensed: each key from MemTotal through HardwareCorrupted was matched against AnonHugePages and skipped with continue]
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8573748 kB' 'MemAvailable: 10562952 kB' 'Buffers: 2436 kB' 'Cached: 2201128 kB' 'SwapCached: 0 kB' 'Active: 855828 kB' 'Inactive: 1466684 kB' 'Active(anon): 129420 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 120788 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146896 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80796 kB' 'KernelStack: 6240 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
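At this point verify_nr_hugepages holds anon=0 (AnonHugePages was sampled because the transparent hugepage setting read "always [madvise] never", i.e. not [never]) and has just re-read /proc/meminfo for the surplus count; the snapshot shows HugePages_Total: 512, HugePages_Rsvd: 0 and HugePages_Surp: 0, so the accounting identity holds for the 512-page request as it did for the 1024-page one. A hedged sketch of that check, reusing the get_meminfo sketch from earlier (that the script fetches HugePages_Rsvd the same way is an assumption):

# sketch of the accounting these get_meminfo calls feed
nr_hugepages=512                       # requested via NRHUGE=512 above
surp=$(get_meminfo HugePages_Surp)     # 0 in the snapshot just printed
resv=$(get_meminfo HugePages_Rsvd)     # 0 (assumed fetched the same way)
total=$(get_meminfo HugePages_Total)   # 512
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2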
00:05:02.559 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [xtrace condensed: keys MemTotal through KReclaimable each matched against HugePages_Surp and skipped with continue]
00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@32 -- # continue 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8573748 kB' 'MemAvailable: 10562952 kB' 'Buffers: 2436 kB' 'Cached: 2201128 kB' 'SwapCached: 0 kB' 'Active: 855848 kB' 'Inactive: 1466684 kB' 'Active(anon): 129440 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 120824 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146900 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80800 kB' 'KernelStack: 6256 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 17:56:54 
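The trace above is the get_meminfo helper from setup/common.sh scanning /proc/meminfo one field at a time until the requested key matches, then echoing its value. A minimal standalone sketch of that scan pattern follows; get_meminfo_sketch is an illustrative name, not the actual SPDK function, and this reads /proc/meminfo directly rather than through the mapfile array the real helper uses.

#!/usr/bin/env bash
# Sketch of the field-scan pattern traced above: read /proc/meminfo line by
# line with IFS=': ' so "HugePages_Surp: 0" splits into var=HugePages_Surp
# and val=0, skipping every field until the requested key comes up.
get_meminfo_sketch() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue  # not the key we want; next field
    echo "$val"                       # the numeric value ("kB" lands in $_)
    return 0
  done < /proc/meminfo
  return 1                            # key not present
}

get_meminfo_sketch HugePages_Surp   # prints 0 on the machine traced here

Running it against the snapshot printed above would yield the same `echo 0` the trace shows, which hugepages.sh then captures as surp=0.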
00:05:02.561 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the read loop again walks every /proc/meminfo field, hitting `continue` on each until the requested key matches]
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:02.563 nr_hugepages=512
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:02.563 resv_hugepages=0
00:05:02.563 surplus_hugepages=0
00:05:02.563 anon_hugepages=0
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:02.563 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8575580 kB' 'MemAvailable: 10564784 kB' 'Buffers: 2436 kB' 'Cached: 2201128 kB' 'SwapCached: 0 kB' 'Active: 855776 kB' 'Inactive: 1466684 kB' 'Active(anon): 129368 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 120728 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146884 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80784 kB' 'KernelStack: 6240 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
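The hugepages.sh@107-109 lines in the trace above assert that the requested page count (512 here) equals nr_hugepages + surp + resv, i.e. the surplus and reserved counts just fetched must not hide a shortfall in the configured total. A hedged sketch of the same bookkeeping check, reading all three counters straight from /proc/meminfo; NR_REQUESTED is an illustrative stand-in for the value the test computes elsewhere in its own config:

#!/usr/bin/env bash
# Sketch of the hugepage accounting check seen in the trace: the page count
# the test asked for must match what the kernel now reports.
NR_REQUESTED=512   # illustrative; the real script derives this itself

nr_hugepages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

# Same arithmetic test the trace shows: 512 == nr_hugepages + surp + resv
if (( NR_REQUESTED == nr_hugepages + surp + resv )); then
  echo "hugepage accounting consistent: ${nr_hugepages} total, ${surp} surplus, ${resv} reserved"
else
  echo "unexpected hugepage count" >&2
  exit 1
fi

With the snapshot above (HugePages_Total: 512, HugePages_Surp: 0, HugePages_Rsvd: 0) both arithmetic tests pass, which is why the trace proceeds to the HugePages_Total lookup without error.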
00:05:02.564 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the read loop walks the /proc/meminfo fields (MemTotal through Unaccepted) looking for HugePages_Total; the captured log breaks off mid-scan here]
-- setup/common.sh@32 -- # continue 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.565 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.566 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.566 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.566 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.566 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.566 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.566 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.566 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.566 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.566 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8575328 kB' 'MemUsed: 3666652 kB' 'SwapCached: 0 kB' 'Active: 855840 kB' 'Inactive: 1466684 kB' 'Active(anon): 129432 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'FilePages: 2203564 kB' 'Mapped: 48680 kB' 'AnonPages: 120548 kB' 'Shmem: 10472 kB' 'KernelStack: 6240 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
00:05:02.566 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.566 17:56:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the same @31 read / @32 compare / @32 continue trace repeats for every node0 meminfo field, MemFree through HugePages_Free ...]
00:05:02.568 17:56:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.568 17:56:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:02.568 17:56:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:02.568 17:56:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:02.568 17:56:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:02.568 17:56:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:02.568 17:56:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:02.568 17:56:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:02.568 node0=512 expecting 512
00:05:02.568 17:56:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:02.568 
00:05:02.568 real 0m0.737s
00:05:02.568 user 0m0.302s
00:05:02.568 sys 0m0.456s
00:05:02.568 ************************************
00:05:02.568 END TEST per_node_1G_alloc
00:05:02.568 ************************************
00:05:02.568 17:56:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:02.568 17:56:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:02.827 17:56:55 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:02.827 17:56:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:02.827 17:56:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:02.827 17:56:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
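The long field-by-field scans in the trace above all come from the get_meminfo helper in setup/common.sh: it slurps the relevant meminfo file and walks it line by line until the requested key matches. Below is a minimal sketch reconstructing that flow from the xtrace alone; it is an approximation for orientation, not the verbatim SPDK source.

#!/usr/bin/env bash
# Reconstruction of get_meminfo from the xtrace above -- an approximation,
# not the verbatim setup/common.sh.
shopt -s extglob  # the +([0-9]) pattern below needs extended globbing

get_meminfo() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo mem

    # Per-node statistics live in sysfs; fall back to the global file
    # when no node is given (the trace shows both variants).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Walk the file until the requested key matches; this loop is the long
    # run of @31 read / @32 compare / @32 continue entries in the log.
    # (bash xtrace quotes the right-hand side of ==, which is why the log
    # renders it as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l.)
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Total 0  # prints 512 for the node0 state dumped above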
00:05:02.827 ************************************
00:05:02.827 START TEST even_2G_alloc
00:05:02.827 ************************************
17:56:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:03.117 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:03.117 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.117 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.117 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.117 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
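The nr_hugepages=1024 assignment traced above is the even-allocation arithmetic of get_test_nr_hugepages: a 2 GiB request divided by the default hugepage size. A worked version of that arithmetic, assuming (as the numbers in the meminfo dumps imply) that both the size argument and Hugepagesize are in kB:

# Sketch of the hugepage arithmetic implied by the trace above.
default_hugepages=2048                     # kB, from 'Hugepagesize: 2048 kB'
size=2097152                               # kB, i.e. the 2 GiB being requested
(( size >= default_hugepages )) || exit 1  # the guard at hugepages.sh@55
nr_hugepages=$((size / default_hugepages))
echo "$nr_hugepages"                       # 1024 pages x 2048 kB = 2 GiB

NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are then exported to scripts/setup.sh, which, per the variable names, presumably spreads those 1024 pages evenly across the NUMA nodes found by get_nodes (a single node on this VM).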
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7525144 kB' 'MemAvailable: 9514344 kB' 'Buffers: 2436 kB' 'Cached: 2201124 kB' 'SwapCached: 0 kB' 'Active: 856672 kB' 'Inactive: 1466680 kB' 'Active(anon): 130264 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 121076 kB' 'Mapped: 48808 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146804 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80704 kB' 'KernelStack: 6352 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.381 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the same @31 read / @32 compare / @32 continue trace repeats for every /proc/meminfo field, MemFree through HardwareCorrupted ...]
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.382 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.383 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7525144 kB' 'MemAvailable: 9514348 kB' 'Buffers: 2436 kB' 'Cached: 2201128 kB' 'SwapCached: 0 kB' 'Active: 856208 kB' 'Inactive: 1466684 kB' 'Active(anon): 129800 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 120848 kB' 'Mapped: 48772 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146788 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80688 kB' 'KernelStack: 6316 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:05:03.383 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.383 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the same @31 read / @32 compare / @32 continue trace repeats field by field, MemFree through NFS_Unstable ...]
00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7525396 kB' 'MemAvailable: 9514600 kB' 'Buffers: 2436 kB' 'Cached: 2201128 kB' 'SwapCached: 0 kB' 'Active: 855944 kB' 'Inactive: 1466684 kB' 'Active(anon): 129536 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 120608 kB' 'Mapped: 48632 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146828 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80728 kB' 'KernelStack: 6280 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.384 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.385 17:56:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.385 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.386 nr_hugepages=1024 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:03.386 resv_hugepages=0 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.386 surplus_hugepages=0 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.386 anon_hugepages=0 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.386 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.387 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7525916 kB' 'MemAvailable: 9515120 kB' 'Buffers: 2436 kB' 'Cached: 2201128 kB' 'SwapCached: 0 kB' 'Active: 856204 kB' 'Inactive: 1466684 kB' 'Active(anon): 129796 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 120868 kB' 'Mapped: 48632 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146828 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80728 kB' 'KernelStack: 6280 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 
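
Note for readers following the trace: all of the meminfo dumps above come from the get_meminfo helper in setup/common.sh, whose xtrace output this is. Below is a minimal standalone sketch of the logic visible at common.sh@16-33 — a hedged reconstruction from the trace, not the SPDK source itself, with the control flow simplified. The trace for the third lookup (HugePages_Total) continues after the sketch.

    #!/usr/bin/env bash
    shopt -s extglob # the "Node +([0-9]) " prefix strip below needs extended globs

    # get_meminfo KEY [NODE] -- sketch reconstructed from the xtrace above.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # With a node argument, read that node's sysfs meminfo instead,
        # as the trace does at common.sh@23-24 for node0 further down.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # per-node lines carry a "Node N " prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line" # e.g. var=HugePages_Surp val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

In this run the helper returns 0 for HugePages_Surp and HugePages_Rsvd and 1024 for HugePages_Total, which is exactly what the surp=0 and resv=0 assignments above record.
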
[xtrace elided: the same per-field compare/continue scan, this time until HugePages_Total matches]
00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
nodes_test[node] += resv )) 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7526356 kB' 'MemUsed: 4715624 kB' 'SwapCached: 0 kB' 'Active: 856208 kB' 'Inactive: 1466684 kB' 'Active(anon): 129800 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 2203564 kB' 'Mapped: 48632 kB' 'AnonPages: 120868 kB' 'Shmem: 10472 kB' 'KernelStack: 6280 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66100 kB' 'Slab: 146828 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.388 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.389 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.390 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.390 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.390 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.390 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.390 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.390 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
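Every span condensed above follows the same loop in setup/common.sh: pick the meminfo file (the per-node one when a node argument is given), strip any "Node N" prefix, and skip field after field until the requested key matches. A minimal runnable sketch of that pattern, assuming a Linux host; the function name and the sed-based prefix strip are illustrative rather than the SPDK code:

#!/usr/bin/env bash
# Sketch of the field-scan behind the condensed xtrace spans: open the
# right meminfo file, skip (continue) every "key: value" pair until the
# requested key matches, then echo its value.
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # Per-node lookups read the node's own meminfo, as the trace shows
    # for /sys/devices/system/node/node0/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the "continue" lines in the trace
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # drop the "Node N" prefix
    return 1
}
get_meminfo_sketch HugePages_Total    # would print 1024 on this box
get_meminfo_sketch HugePages_Surp 0   # would print 0, matching the trace

On this box the two calls at the end would print the same values the trace echoes at setup/common.sh@33.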
00:05:03.390 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:03.390 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:03.390 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:03.390 node0=1024 expecting 1024
00:05:03.390 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:03.390 17:56:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:03.390 
00:05:03.390 real	0m0.708s
00:05:03.390 user	0m0.327s
00:05:03.390 sys	0m0.421s
00:05:03.390 17:56:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:03.390 17:56:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:03.390 ************************************
00:05:03.390 END TEST even_2G_alloc
00:05:03.390 ************************************
00:05:03.390 17:56:55 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:03.390 17:56:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:03.390 17:56:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:03.390 17:56:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:03.390 ************************************
00:05:03.390 START TEST odd_alloc
00:05:03.390 ************************************
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
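The get_test_nr_hugepages trace above turns HUGEMEM=2049 into nr_hugepages=1025. The arithmetic, as a sketch; the ceiling rounding is an assumption about how the helper rounds, not confirmed by the trace:

#!/usr/bin/env bash
# HUGEMEM=2049 (MiB) becomes a request of 2098176 kB; at the default
# 2048 kB hugepage size that needs 1025 pages -- the deliberately odd
# count this test is named for.
size_kb=$(( 2049 * 1024 ))      # 2098176, the value traced at @49
hugepage_kb=2048
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # ceil
echo "nr_hugepages=$nr_hugepages"                               # 1025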
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:03.390 17:56:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:03.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:03.959 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.959 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.959 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.959 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.959 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.960 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.960 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7519092 kB' 'MemAvailable: 9508300 kB' 'Buffers: 2436 kB' 'Cached: 2201132 kB' 'SwapCached: 0 kB' 'Active: 857192 kB' 'Inactive: 1466688 kB' 'Active(anon): 130784 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121896 kB' 'Mapped: 49080 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146852 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80752 kB' 'KernelStack: 6260 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:05:03.960 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.960 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: meminfo fields scanned one by one until AnonHugePages matched]
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
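The guard traced at setup/hugepages.sh@96 checks the bracketed transparent-hugepage policy before bothering with AnonHugePages: the kernel reports the active policy in brackets ("always [madvise] never" above), and anonymous THP only matters when the selected policy is not [never]. A self-contained sketch of that check; variable names are illustrative, the sysfs path is the standard kernel location:

#!/usr/bin/env bash
# Only subtract anonymous THP from the accounting when THP is enabled.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
anon=0
if [[ $thp != *"[never]"* ]]; then
    # Same field the trace then fetches via get_meminfo AnonHugePages.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=$anon"   # 0 on this box, matching hugepages.sh@97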
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.961 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7519084 kB' 'MemAvailable: 9508292 kB' 'Buffers: 2436 kB' 'Cached: 2201132 kB' 'SwapCached: 0 kB' 'Active: 856364 kB' 'Inactive: 1466688 kB' 'Active(anon): 129956 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121060 kB' 'Mapped: 48800 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146880 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80780 kB' 'KernelStack: 6240 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed (00:05:03.961-00:05:04.226): fresh /proc/meminfo snapshot scanned field by field toward HugePages_Surp; the scan continues past the end of this excerpt]
read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7519084 kB' 'MemAvailable: 9508292 kB' 'Buffers: 2436 kB' 'Cached: 2201132 kB' 'SwapCached: 0 kB' 'Active: 855892 kB' 'Inactive: 1466688 kB' 'Active(anon): 129484 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 120628 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146888 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80788 kB' 'KernelStack: 6256 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
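[Editor's note] For readers following the trace: the scans above and below are the get_meminfo helper in setup/common.sh snapshotting /proc/meminfo with mapfile, stripping any "Node N " prefix, and walking the fields with IFS=': '. A minimal sketch reconstructed from the traced lines (common.sh@17-@33); names follow the trace, but the loop shape is an approximation, not the verbatim source:

# Sketch of setup/common.sh's get_meminfo, reconstructed from the xtrace above.
shopt -s extglob                        # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=$2                # field to fetch; optional NUMA node
    local mem_f=/proc/meminfo
    local mem var val _ line
    # Per-node lookups switch to the node's own meminfo file when it exists
    # (traced at common.sh@23-@24 with node=0 later in this log).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix (common.sh@29)
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the per-field test seen in the trace
        echo "$val"                        # kB figure, or bare count for HugePages_*
        return 0
    done
    return 1
}

Used exactly as the trace shows, e.g. surp=$(get_meminfo HugePages_Surp) feeding hugepages.sh@99 above.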
00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.226 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.227 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:04.228 nr_hugepages=1025 00:05:04.228 resv_hugepages=0 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:04.228 surplus_hugepages=0 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:04.228 anon_hugepages=0 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7519084 kB' 'MemAvailable: 9508292 kB' 'Buffers: 2436 kB' 'Cached: 2201132 kB' 'SwapCached: 0 kB' 'Active: 856076 kB' 'Inactive: 1466688 kB' 'Active(anon): 129668 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 120772 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146884 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80784 kB' 'KernelStack: 6240 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 
17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.228 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
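[Editor's note] The arithmetic this trace keeps returning to (hugepages.sh@99-@110, visible at the boundaries of the two scans above) is the odd-allocation bookkeeping check: the test requested an odd pool size, 1025, and the kernel's counters must balance. A condensed sketch; the exit-on-failure handling is assumed, the trace shows only the arithmetic tests:

# Condensed sketch of the check traced at setup/hugepages.sh@99-@110.
nr_hugepages=1025                              # the odd pool size under test
surp=$(get_meminfo HugePages_Surp)             # traced result: 0
resv=$(get_meminfo HugePages_Rsvd)             # traced result: 0
total=$(get_meminfo HugePages_Total)           # traced result: 1025
# Kernel bookkeeping must balance: 1025 == 1025 + 0 + 0.
(( total == nr_hugepages + surp + resv )) || exit 1   # failure handling assumed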
00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.229 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7519084 kB' 'MemUsed: 4722896 kB' 'SwapCached: 0 kB' 'Active: 855944 kB' 'Inactive: 1466688 kB' 'Active(anon): 129536 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2203568 kB' 'Mapped: 48680 kB' 'AnonPages: 120640 kB' 'Shmem: 10472 kB' 'KernelStack: 6240 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66100 kB' 'Slab: 146884 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
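[Editor's note] Two lines back in the trace (hugepages.sh@27-@33), get_nodes built the per-node expectation table; on this single-node VM it records one entry. A sketch of that enumeration (deriving no_nodes from the array size is an assumption; the trace only shows its value):

# Sketch of get_nodes as traced at setup/hugepages.sh@27-@33.
shopt -s extglob                      # for the node+([0-9]) glob in the trace
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=1025    # node index -> hugepages expected there
done
no_nodes=${#nodes_sys[@]}             # traced as no_nodes=1 on this host
(( no_nodes > 0 ))                    # sanity: at least one NUMA node present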
00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.230 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
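What the @17-@32 trace above is doing: common.sh's get_meminfo walks a meminfo listing one 'key: value' line at a time, skipping non-matching fields with continue. A rough standalone sketch of that pattern (the name get_meminfo_value and its exact layout are illustrative, not the actual SPDK helper):

    #!/usr/bin/env bash
    shopt -s extglob
    # Sketch of the scan traced above: read /proc/meminfo, or a per-node
    # meminfo file when a node number is given, and print the value of the
    # requested key.
    get_meminfo_value() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node files prefix every line with "Node N ", e.g.
        # "Node 0 HugePages_Total: 1025" - strip that before parsing.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # the long continue run in the log
            echo "$val"
            return 0
        done
        return 1
    }
    # e.g. get_meminfo_value HugePages_Surp 0  ->  surplus 2 MiB pages on node0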
00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.231 node0=1025 expecting 1025 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:04.231 00:05:04.231 real 0m0.711s 00:05:04.231 user 0m0.325s 00:05:04.231 sys 0m0.429s 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.231 17:56:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:04.231 ************************************ 00:05:04.231 END TEST odd_alloc 00:05:04.231 ************************************ 00:05:04.231 17:56:56 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:04.231 17:56:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.231 17:56:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.231 17:56:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:04.231 ************************************ 00:05:04.231 START TEST custom_alloc 00:05:04.231 ************************************ 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:04.231 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:04.232 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:04.232 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:04.232 17:56:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:04.232 17:56:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.232 17:56:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:04.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.754 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:04.754 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:04.754 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:04.754 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:04.754 17:56:57 
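Just before the scripts/setup.sh invocation above, the @175-@187 trace shows custom_alloc spreading its 512-page budget across NUMA nodes and serializing the result into HUGENODE (on this single-node VM the spec collapses to nodes_hp[0]=512). A minimal sketch of that construction, variable names as in the trace but otherwise illustrative:

    #!/usr/bin/env bash
    # Sketch of the HUGENODE construction at hugepages.sh@181-187; on a box
    # with more NUMA nodes, nodes_hp would carry one page count per node.
    declare -a nodes_hp HUGENODE
    nodes_hp[0]=512                 # all 512 pages on node0 (single-node VM)
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    # setup.sh receives the comma-joined spec, e.g. HUGENODE='nodes_hp[0]=512'
    spec=$(IFS=,; echo "${HUGENODE[*]}")
    echo "HUGENODE=$spec (total: $_nr_hugepages pages)"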
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8570872 kB' 'MemAvailable: 10560084 kB' 'Buffers: 2436 kB' 'Cached: 2201136 kB' 'SwapCached: 0 kB' 'Active: 856412 kB' 'Inactive: 1466692 kB' 'Active(anon): 130004 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466692 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121068 kB' 'Mapped: 48768 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146968 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80868 kB' 'KernelStack: 6240 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.754 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [the @31 IFS=': ' / @31 read -r var val _ / @32 compare / @32 continue cycle repeats verbatim for every remaining /proc/meminfo field above, MemFree through HardwareCorrupted, until the requested key matches] 00:05:04.755 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
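The hugepages.sh@96 test earlier in this pass pattern-matches the kernel's transparent hugepage mode string (here 'always [madvise] never', i.e. madvise is active; the brackets mark the selected mode) and only folds AnonHugePages into the accounting when THP is not disabled outright with '[never]'. A hedged, slightly simplified sketch of that guard (illustrative, not the verbatim SPDK helper):

    #!/usr/bin/env bash
    # Sketch of the THP guard at hugepages.sh@96-97. The kernel brackets the
    # active mode, e.g. "always [madvise] never".
    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP may be handing out anonymous huge pages; count them (kB).
        IFS=': ' read -r _ anon _ < <(grep AnonHugePages /proc/meminfo)
    fi
    echo "AnonHugePages counted: ${anon} kB"   # 0 kB in the run above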
00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8570872 kB' 'MemAvailable: 10560084 kB' 'Buffers: 2436 kB' 'Cached: 2201136 kB' 'SwapCached: 0 kB' 'Active: 856172 kB' 'Inactive: 1466692 kB' 'Active(anon): 129764 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466692 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 120892 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146960 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80860 kB' 'KernelStack: 6256 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.756 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [the @31 IFS=': ' / @31 read -r var val _ / @32 compare / @32 continue cycle repeats verbatim for every remaining /proc/meminfo field above, MemFree through HugePages_Rsvd, until the requested key matches] 00:05:04.757 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.757 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.757 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:04.757 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:04.758 17:56:57 
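The condensed scan above is all there is to get_meminfo: split each meminfo line on ': ', read the field name and value, keep skipping until the requested key matches, then echo the value. A minimal standalone sketch of that pattern, assuming a Linux /proc/meminfo; the helper name meminfo_value is illustrative and not part of the SPDK scripts, which scan a mapfile'd copy of the file instead of reading it directly:

meminfo_value() {
    local get=$1 var val _
    # Split "Key:   value unit" on ':' and spaces, exactly as the
    # IFS=': ' / read -r var val _ records in the trace do.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching fields
        echo "$val"                        # e.g. 0 for HugePages_Surp
        return 0
    done < /proc/meminfo
    return 1
}

meminfo_value HugePages_Surp

Echoing the matched value and returning 0 is what lets a caller capture the result with command substitution, which is consistent with the surp=0 assignment recorded at hugepages.sh@99 above.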
00:05:04.758 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read/compare loop skipped MemTotal through HugePages_Free while scanning for HugePages_Rsvd]
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:05.021 nr_hugepages=512
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:05.021 resv_hugepages=0
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:05.021 surplus_hugepages=0
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:05.021 anon_hugepages=0
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.021 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.022 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.022 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8571444 kB' 'MemAvailable: 10560656 kB' 'Buffers: 2436 kB' 'Cached: 2201136 kB' 'SwapCached: 0 kB' 'Active: 856044 kB' 'Inactive: 1466692 kB' 'Active(anon): 129636 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466692 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 120772 kB' 'Mapped: 48680 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146944 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80844 kB' 'KernelStack: 6192 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 347788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
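The checks at hugepages.sh@107 and @110 are the substance of this custom_alloc pass: the 512 pages requested through nr_hugepages, plus surplus and reserved pages (both 0 here), must equal the HugePages_Total the kernel reports. A sketch of the same bookkeeping under the assumption that reading the counters with awk is equivalent to get_meminfo; the variable names are illustrative:

# Read the hugepage counters the test compares; awk stands in for
# get_meminfo here, and the arithmetic mirrors hugepages.sh@110.
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
nr=$(cat /proc/sys/vm/nr_hugepages)

if (( total == nr + surp + resv )); then
    echo "pool consistent: total=$total nr=$nr surp=$surp resv=$resv"
else
    echo "pool mismatch: total=$total nr=$nr surp=$surp resv=$resv" >&2
fi

With surp=0 and resv=0, as in the trace, the check reduces to total == nr.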
00:05:05.022 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read/compare loop skipped MemTotal through Unaccepted while scanning for HugePages_Total]
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
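get_meminfo now runs a third time, with node 0 as an explicit argument. The @23 and @24 records below show the only difference from the system-wide calls: mem_f switches from /proc/meminfo to the per-node sysfs file, whose lines carry a leading "Node <N> " that the extglob substitution at @29 strips off. A standalone sketch of that per-node lookup, assuming node0 exists as it does on this test VM:

shopt -s extglob                       # needed for the +([0-9]) pattern
node=0
mem_f=/proc/meminfo
# Same fallback order as common.sh@22-24: prefer the per-node file.
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo

mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node 0 " prefix
printf '%s\n' "${mem[@]}" | grep HugePages_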
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8571896 kB' 'MemUsed: 3670084 kB' 'SwapCached: 0 kB' 'Active: 856116 kB' 'Inactive: 1466692 kB' 'Active(anon): 129708 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466692 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2203572 kB' 'Mapped: 48680 kB' 'AnonPages: 120908 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66100 kB' 'Slab: 146900 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:05.023 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read/compare loop skipped MemTotal through HugePages_Free while scanning node0 meminfo for HugePages_Surp]
setup/common.sh@32 -- # continue 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.024 17:56:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:05.025 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.025 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.025 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.025 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.025 node0=512 expecting 512 00:05:05.025 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:05.025 17:56:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:05.025 00:05:05.025 real 0m0.714s 00:05:05.025 user 0m0.350s 00:05:05.025 sys 0m0.408s 00:05:05.025 17:56:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.025 17:56:57 
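
The scan condensed above is setup/common.sh's get_meminfo helper: it slurps a meminfo source (plain /proc/meminfo, or the per-node file when a node argument is given) with mapfile, splits each "key: value" line on IFS=': ', and skips fields with continue until the requested key (here HugePages_Surp) matches, at which point it echoes the value. A minimal re-sketch of that pattern; the body is a simplified reconstruction from the xtrace, not the SPDK source, and the per-node branch is omitted:

    #!/usr/bin/env bash
    # Simplified reconstruction of the get_meminfo pattern from the xtrace:
    # read /proc/meminfo into an array, scan "key: value" pairs, and print
    # the value of the first field whose name matches the requested key.
    get_meminfo_sketch() {
        local get=$1 var val _ line
        local -a mem
        mapfile -t mem < /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the "continue" runs in the log
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on this box, as in the log
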
00:05:05.025 17:56:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:05.025 ************************************
00:05:05.025 END TEST custom_alloc
00:05:05.025 ************************************
00:05:05.025 17:56:57 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:05.025 17:56:57 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:05.025 17:56:57 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:05.025 17:56:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:05.025 ************************************
00:05:05.025 START TEST no_shrink_alloc
00:05:05.025 ************************************
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:05.025 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:05.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:05.545 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:05.546 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:05.546 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:05.546 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
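
Right before invoking scripts/setup.sh, get_test_nr_hugepages turns the requested size (2097152 kB) into a hugepage count and assigns it to each user-supplied NUMA node: at the 2048 kB default hugepage size that gives nr_hugepages=1024, stored in nodes_test[0]. A sketch of that arithmetic; the division step is inferred from the values in the trace (the xtrace shows only its result), while the variable names follow the log:

    # Inferred sizing arithmetic behind the get_test_nr_hugepages xtrace:
    # a size in kB divided by the default hugepage size (Hugepagesize in
    # /proc/meminfo, 2048 kB here) gives the page count for each node.
    size=2097152                       # kB, from "local size=2097152"
    default_hugepages=2048             # kB, "Hugepagesize: 2048 kB" in the snapshots
    user_nodes=(0)

    (( size >= default_hugepages )) || exit 1
    nr_hugepages=$(( size / default_hugepages ))   # 1024, matching "nr_hugepages=1024"

    declare -a nodes_test=()
    for _no_nodes in "${user_nodes[@]}"; do
        nodes_test[_no_nodes]=$nr_hugepages        # nodes_test[0]=1024
    done
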
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.546 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7534308 kB' 'MemAvailable: 9523516 kB' 'Buffers: 2436 kB' 'Cached: 2201132 kB' 'SwapCached: 0 kB' 'Active: 853728 kB' 'Inactive: 1466688 kB' 'Active(anon): 127320 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 118368 kB' 'Mapped: 48236 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146772 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80672 kB' 'KernelStack: 6212 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
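
One detail worth noting in the get_meminfo entry records above: the expansion mem=("${mem[@]#Node +([0-9]) }") exists because the same helper can read a per-node file, /sys/devices/system/node/node$N/meminfo, whose lines carry a "Node N " prefix (here node is empty, so the [[ -e ... ]] probe fails and plain /proc/meminfo is used). A small demonstration of that prefix strip on two fabricated node-0 lines; extglob must be enabled for the +([0-9]) pattern:

    # Demonstrating the "Node N " prefix strip from setup/common.sh@29
    # on fabricated per-node meminfo lines:
    shopt -s extglob
    mem=("Node 0 HugePages_Total: 1024" "Node 0 HugePages_Free: 1024")
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"   # -> "HugePages_Total: 1024" and "HugePages_Free: 1024"
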
[... xtrace condensed: setup/common.sh@31-@32 read and skip every /proc/meminfo field (MemTotal through HardwareCorrupted) with "continue" until the requested key matches ...]
00:05:05.547 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:05.547 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.547 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:05.547 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
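
With the transparent-hugepage mode read back as "always [madvise] never" (not "[never]"), verify_nr_hugepages samples AnonHugePages and gets anon=0; the next two get_meminfo calls fetch the surplus and reserved counters the same way. Putting the three samples together as the hugepages.sh@97-@100 records imply, reusing the sketch above; the sysfs path for the THP mode is the standard kernel location, not something shown in this excerpt:

    # The three counters verify_nr_hugepages collects, per the xtrace:
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)
    else
        anon=0                     # THP disabled: no anonymous hugepages to count
    fi
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    echo "anon=$anon surp=$surp resv=$resv"   # anon=0 surp=0 resv=0 in this run
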
00:05:05.547 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... xtrace condensed: the same setup/common.sh@17-@31 entry records as above (locals, mem_f=/proc/meminfo, mapfile, prefix strip) ...]
00:05:05.547 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7534308 kB' 'MemAvailable: 9523516 kB' 'Buffers: 2436 kB' 'Cached: 2201132 kB' 'SwapCached: 0 kB' 'Active: 852904 kB' 'Inactive: 1466688 kB' 'Active(anon): 126496 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 117568 kB' 'Mapped: 48108 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146772 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80672 kB' 'KernelStack: 6208 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
[... xtrace condensed: setup/common.sh@31-@32 read and skip every field (MemTotal through HugePages_Rsvd) with "continue" until the requested key matches ...]
00:05:05.549 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.549 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.549 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:05.549 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
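
All three snapshots agree on the hugepage state this test cares about: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0. In other words, every one of the 1024 pages just allocated is still free, none is reserved against a pending fault and none was created as overcommit surplus, which is the clean baseline no_shrink_alloc wants before it starts. A one-liner to pull the same four counters directly, offered only as a compact alternative to the field-by-field shell scan (the test itself does not run this):

    # Compact cross-check of the four hugepage counters from the snapshots:
    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ { print $1, $2 }' /proc/meminfo
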
00:05:05.549 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... xtrace condensed: the same setup/common.sh@17-@31 entry records as above ...]
00:05:05.549 17:56:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7534328 kB' 'MemAvailable: 9523536 kB' 'Buffers: 2436 kB' 'Cached: 2201132 kB' 'SwapCached: 0 kB' 'Active: 852696 kB' 'Inactive: 1466688 kB' 'Active(anon): 126288 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 117384 kB' 'Mapped: 47940 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146768 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80668 kB' 'KernelStack: 6176 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
[... xtrace condensed: setup/common.sh@31-@32 read and skip fields MemTotal through AnonPages with "continue" ...]
00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.550 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:05.551 nr_hugepages=1024 00:05:05.551 resv_hugepages=0 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.551 surplus_hugepages=0 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.551 anon_hugepages=0 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- 
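The trace above is setup/common.sh's get_meminfo resolving HugePages_Surp and HugePages_Rsvd: it picks a meminfo source (the per-node sysfs file when a node id is passed, /proc/meminfo otherwise), strips the "Node N " prefix the sysfs variant adds, then reads "field: value" pairs with IFS=': ' until the requested field matches and echoes its value. A condensed sketch of that pattern, assuming bash with extglob; meminfo_get is a hypothetical name, and the real get_meminfo (which slurps the file with mapfile first, as the trace shows) differs in detail:

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) pattern below

    meminfo_get() {
        local get=$1 node=$2 line var val mem_f=/proc/meminfo
        # Per-node statistics live under sysfs when a node id is supplied
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }       # node files prefix every row with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                   # kB for most fields, a bare count for HugePages_*
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    surp=$(meminfo_get HugePages_Surp)            # e.g. 0
    node0_total=$(meminfo_get HugePages_Total 0)  # e.g. 1024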
00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.551 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7534120 kB' 'MemAvailable: 9523328 kB' 'Buffers: 2436 kB' 'Cached: 2201132 kB' 'SwapCached: 0 kB' 'Active: 852932 kB' 'Inactive: 1466688 kB' 'Active(anon): 126524 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 117624 kB' 'Mapped: 47940 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146764 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80664 kB' 'KernelStack: 6176 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:05:05.552 [... every meminfo field read and skipped until HugePages_Total ...]
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
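At this point hugepages.sh has checked the global invariant (the HugePages_Total the kernel reports must equal the requested nr_hugepages plus surplus and reserved pages) and, via get_nodes, recorded what each NUMA node actually allocated so the per-node figures can be compared next. A sketch of that accounting check, reusing the hypothetical meminfo_get above with extglob still enabled (verify_hugepages is likewise a made-up name):

    verify_hugepages() {
        local want=$1 node total surp resv sum=0
        total=$(meminfo_get HugePages_Total)
        surp=$(meminfo_get HugePages_Surp)
        resv=$(meminfo_get HugePages_Rsvd)
        # Global invariant traced above: total == requested + surplus + reserved
        (( total == want + surp + resv )) || return 1
        # Per-node totals (node id is the trailing digits of .../nodeN) must add up
        for node in /sys/devices/system/node/node+([0-9]); do
            (( sum += $(meminfo_get HugePages_Total "${node##*node}") ))
        done
        (( sum == total ))
    }

    verify_hugepages 1024 && echo 'hugepage accounting consistent'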
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.813 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7534120 kB' 'MemUsed: 4707860 kB' 'SwapCached: 0 kB' 'Active: 852632 kB' 'Inactive: 1466688 kB' 'Active(anon): 126224 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2203568 kB' 'Mapped: 47940 kB' 'AnonPages: 117644 kB' 'Shmem: 10472 kB' 'KernelStack: 6176 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66100 kB' 'Slab: 146768 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:05.814 [... every node0 meminfo field read and skipped until HugePages_Surp ...]
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:05.815 node0=1024 expecting 1024
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:05.815 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
0010): Already using the uio_pci_generic driver 00:05:06.336 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.336 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.336 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:06.337 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7534448 kB' 'MemAvailable: 9523656 kB' 'Buffers: 2436 kB' 'Cached: 2201132 kB' 'SwapCached: 0 kB' 'Active: 853392 kB' 'Inactive: 1466688 kB' 'Active(anon): 126984 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118136 kB' 'Mapped: 47984 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146820 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80720 kB' 'KernelStack: 6192 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- 
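The trimmed spans above are all one mechanism: get_meminfo in setup/common.sh walks each line of the chosen meminfo file until the requested key matches, so every skipped key costs one [[ ... ]] test plus one 'continue' in the xtrace. A minimal sketch of that parsing pattern, reconstructed only from the commands the trace shows (the real function body may differ in details):

    #!/usr/bin/env bash
    shopt -s extglob # required by the +([0-9]) pattern seen in the trace
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # with a node argument, read the per-node view when it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # strip the "Node N " prefix of per-node lines
        while IFS=': ' read -r var val _; do
            # each non-matching key becomes one [[ ... ]] + 'continue' pair above
            [[ $var == "$get" ]] && echo "$val" && return 0
            continue
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Surp # prints 0 on this host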
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7534448 kB' 'MemAvailable: 9523656 kB' 'Buffers: 2436 kB' 'Cached: 2201132 kB' 'SwapCached: 0 kB' 'Active: 853180 kB' 'Inactive: 1466688 kB' 'Active(anon): 126772 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 117932 kB' 'Mapped: 48200 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146812 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80712 kB' 'KernelStack: 6192 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 337972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.337 17:56:58 setup.sh.hugepages.no_shrink_alloc -- [xtrace trimmed: keys MemTotal through HugePages_Rsvd each tested against HugePages_Surp; all skipped with 'continue']
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
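With anon and surp both traced as 0, and HugePages_Rsvd about to be fetched the same way, the verification that follows is plain arithmetic over three meminfo values plus the per-node tally. A hedged condensation of what the trace implies, reusing the get_meminfo sketch above (an illustrative simplification, not the verbatim setup/hugepages.sh body):

    anon=$(get_meminfo AnonHugePages)  # 0 in this run (value reported in kB)
    surp=$(get_meminfo HugePages_Surp) # 0
    resv=$(get_meminfo HugePages_Rsvd) # 0
    declare -A nodes_test=([0]=1024)   # hugepages observed on node0
    ((nodes_test[0] += surp))          # surplus pages would inflate the tally; adds 0 here
    echo "node0=${nodes_test[0]} expecting 1024"
    [[ ${nodes_test[0]} == 1024 ]]     # the comparison that lets the test pass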
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.338 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7534448 kB' 'MemAvailable: 9523652 kB' 'Buffers: 2436 kB' 'Cached: 2201128 kB' 'SwapCached: 0 kB' 'Active: 852736 kB' 'Inactive: 1466684 kB' 'Active(anon): 126328 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 117544 kB' 'Mapped: 47828 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146808 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80708 kB' 'KernelStack: 6144 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54484 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- [xtrace trimmed: keys MemTotal through VmallocUsed each tested against HugePages_Rsvd; all skipped with 'continue']
00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:06.339 nr_hugepages=1024 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:06.339 resv_hugepages=0 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.339 surplus_hugepages=0 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.339 anon_hugepages=0 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- 
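The trace above is the meminfo scan loop at work: with IFS set to ': ', each `read -r var val _` splits a /proc/meminfo line into a key and a value, every non-matching key falls through to `continue`, and the value of the requested key (here HugePages_Rsvd, value 0) is echoed back. A minimal standalone sketch of the same pattern; the helper name is illustrative, not the one in setup/common.sh:

    #!/usr/bin/env bash
    # Illustrative stand-in for the scan traced above (not SPDK's helper).
    meminfo_get() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            # Keys in /proc/meminfo end with ':', which IFS strips here.
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    meminfo_get HugePages_Rsvd   # prints 0 on the box traced above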
00:05:06.339 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7534448 kB' 'MemAvailable: 9523656 kB' 'Buffers: 2436 kB' 'Cached: 2201132 kB' 'SwapCached: 0 kB' 'Active: 852588 kB' 'Inactive: 1466688 kB' 'Active(anon): 126180 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 117340 kB' 'Mapped: 47940 kB' 'Shmem: 10472 kB' 'KReclaimable: 66100 kB' 'Slab: 146808 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80708 kB' 'KernelStack: 6144 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54484 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 196460 kB' 'DirectMap2M: 5046272 kB' 'DirectMap1G: 9437184 kB'
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:06.340 [... identical compare/continue iterations for the remaining non-matching keys, MemFree through Unaccepted ...]
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:06.340 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
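With the global HugePages_Total confirmed at 1024, the helper is re-invoked below with a node argument. Per-node counters live under /sys/devices/system/node/nodeN/meminfo, where every line carries a "Node N " prefix that the trace strips with an extglob substitution before scanning. A hedged sketch of that variant, assuming the standard sysfs layout (the function name is made up for illustration):

    # Hypothetical per-node variant; mirrors the prefix stripping in the trace.
    shopt -s extglob
    node_meminfo_get() {
        local node=$1 key=$2 line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }   # "Node 0 MemFree: ..." -> "MemFree: ..."
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }
    node_meminfo_get 0 HugePages_Surp   # prints 0 in this run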
mem=("${mem[@]#Node +([0-9]) }") 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7533944 kB' 'MemUsed: 4708036 kB' 'SwapCached: 0 kB' 'Active: 852556 kB' 'Inactive: 1466688 kB' 'Active(anon): 126148 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1466688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2203568 kB' 'Mapped: 47940 kB' 'AnonPages: 117308 kB' 'Shmem: 10472 kB' 'KernelStack: 6128 kB' 'PageTables: 3592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66100 kB' 'Slab: 146808 kB' 'SReclaimable: 66100 kB' 'SUnreclaim: 80708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 
17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.341 node0=1024 expecting 1024 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:06.341 00:05:06.341 real 0m1.409s 00:05:06.341 user 0m0.658s 00:05:06.341 sys 0m0.840s 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.341 17:56:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:06.341 ************************************ 00:05:06.341 END TEST no_shrink_alloc 
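The closing check above confirms that node0 still reports all 1024 pages after the shrink attempt ("node0=1024 expecting 1024"). The same assertion can be made directly against the per-node sysfs counter instead of the meminfo scan; a sketch assuming the 2048 kB page size used in this run:

    # Sketch of the final check, reading sysfs rather than meminfo.
    expected=1024
    got=$(</sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
    echo "node0=${got} expecting ${expected}"
    (( got == expected )) || exit 1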
00:05:06.341 17:56:58 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:05:06.341 17:56:58 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:06.341 17:56:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:06.341 17:56:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:06.341 17:56:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:06.341 17:56:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:06.341 17:56:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:06.600 17:56:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:06.600 17:56:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:06.600 
00:05:06.600 real 0m6.212s
00:05:06.600 user 0m2.739s
00:05:06.600 sys 0m3.658s
00:05:06.600 17:56:58 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:06.600 17:56:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:06.600 ************************************
00:05:06.600 END TEST hugepages
00:05:06.600 ************************************
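clear_hp, traced just above, releases the reservation by writing 0 into every hugepages-*/nr_hugepages knob under each node before exporting CLEAR_HUGE=yes. Roughly, and assuming root and the same sysfs layout:

    # Approximation of the clear_hp loop above (writing sysfs needs root).
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # drop this size's reservation
        done
    done
    export CLEAR_HUGE=yes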
00:05:06.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:06.600 17:56:58 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:06.600 17:56:58 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.600 17:56:58 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.166 17:57:04 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:13.166 17:57:04 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.166 17:57:04 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.166 17:57:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:13.166 ************************************ 00:05:13.166 START TEST guess_driver 00:05:13.166 ************************************ 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:13.166 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:13.166 Looking for driver=uio_pci_generic 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:13.166 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.167 17:57:04 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
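The pick_driver trace above encodes the selection order: vfio wins when /sys/kernel/iommu_groups is populated or the unsafe no-IOMMU toggle reads Y, otherwise uio_pci_generic is accepted if modprobe --show-depends can resolve its .ko for the running kernel. A simplified reconstruction of that decision, not the exact SPDK function (grep stands in for the pattern match used in the script):

    shopt -s nullglob   # an empty iommu_groups dir must count as zero entries

    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        if ((${#groups[@]} > 0)) || [[ $unsafe == Y ]]; then
            echo vfio-pci
            return 0
        fi
        # --show-depends prints the insmod commands (.ko paths) modprobe
        # would run; any hit means the module exists for this kernel.
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found' >&2
            return 1
        fi
    }

    driver=$(pick_driver) && echo "Looking for driver=$driver"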
00:05:13.167 17:57:04 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.167 17:57:04 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:13.167 17:57:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:13.167 17:57:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:13.167 17:57:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.735 17:57:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.735 17:57:06 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.344 00:05:20.344 real 0m7.137s 00:05:20.344 user 0m0.788s 00:05:20.344 sys 0m1.437s 00:05:20.344 17:57:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.344 ************************************ 00:05:20.344 END TEST guess_driver 00:05:20.344 ************************************ 00:05:20.344 17:57:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:20.344 00:05:20.344 real 0m13.193s 00:05:20.344 user 0m1.150s 00:05:20.344 sys 0m2.232s 00:05:20.344 17:57:12 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.344 17:57:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:20.344 ************************************ 00:05:20.344 END TEST driver 00:05:20.344 ************************************ 00:05:20.344 17:57:12 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:20.344 17:57:12 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:20.344 17:57:12 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.344 17:57:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:20.344 ************************************ 00:05:20.344 START TEST devices 00:05:20.344 
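Once a driver is chosen, guess_driver re-reads the setup.sh config output with read -r _ _ _ _ marker setup_driver and only trusts lines whose fifth field is the "->" rebind marker, as the [[ -> == \-\> ]] checks above show. A sketch of that verification loop, assuming the output format seen later in this log ("0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic"):

    expected=uio_pci_generic
    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        # Header and status lines lack the "->" marker and are skipped.
        [[ $marker == '->' ]] || continue
        [[ $setup_driver == "$expected" ]] || fail=1
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
    ((fail == 0)) && echo "every device bound to $expected"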
************************************ 00:05:20.344 17:57:12 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:20.344 * Looking for test storage... 00:05:20.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:20.344 17:57:12 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:20.344 17:57:12 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:20.344 17:57:12 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:20.344 17:57:12 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:20.908 
17:57:13 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:20.908 17:57:13 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:20.908 17:57:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:20.908 17:57:13 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:20.908 No valid GPT data, bailing 00:05:20.908 17:57:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:20.908 17:57:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:20.908 17:57:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:20.908 17:57:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:20.908 17:57:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:20.908 17:57:13 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@201 -- # 
ctrl=nvme1 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:20.908 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:20.908 17:57:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:20.908 17:57:13 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:21.167 No valid GPT data, bailing 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:21.167 17:57:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:21.167 17:57:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:21.167 17:57:13 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:21.167 No valid GPT data, bailing 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:21.167 17:57:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:21.167 17:57:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:21.167 17:57:13 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:21.167 
17:57:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:05:21.167 No valid GPT data, bailing 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:05:21.167 17:57:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:05:21.167 17:57:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:05:21.167 17:57:13 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:21.167 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:05:21.167 17:57:13 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:05:21.167 No valid GPT data, bailing 00:05:21.427 17:57:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:21.427 17:57:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:21.427 17:57:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:05:21.427 17:57:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:05:21.427 17:57:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:05:21.427 17:57:13 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:21.427 
17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:21.427 17:57:13 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:05:21.427 17:57:13 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:21.427 No valid GPT data, bailing 00:05:21.427 17:57:13 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:21.427 17:57:13 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:21.427 17:57:13 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:21.427 17:57:13 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:21.427 17:57:13 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:21.427 17:57:13 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:21.427 17:57:13 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:21.427 17:57:13 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.427 17:57:13 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.427 17:57:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:21.427 ************************************ 00:05:21.427 START TEST nvme_mount 00:05:21.427 ************************************ 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:21.427 17:57:13 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:22.361 Creating new GPT entries in memory. 00:05:22.361 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:22.361 other utilities. 00:05:22.361 17:57:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:22.361 17:57:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:22.362 17:57:14 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:22.362 17:57:14 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:22.362 17:57:14 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:23.733 Creating new GPT entries in memory. 00:05:23.733 The operation has completed successfully. 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58774 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.733 17:57:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:23.733 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.733 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:23.733 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:23.733 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.733 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.733 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.992 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.992 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.992 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.992 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.992 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.992 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.251 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.251 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.509 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.509 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:24.509 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.509 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.509 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.509 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:24.509 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.509 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.509 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:24.509 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:24.509 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:24.509 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:24.509 17:57:16 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:05:24.768 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:24.768 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:24.768 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:24.768 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.768 17:57:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:25.027 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.027 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:25.027 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:25.027 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.027 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.027 17:57:17 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.027 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.027 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.285 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.285 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.285 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.285 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.542 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.542 17:57:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.799 17:57:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:26.056 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.056 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:26.056 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:26.056 17:57:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.056 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.056 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.056 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.056 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.314 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.314 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.314 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.314 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.572 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.572 17:57:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.829 17:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.829 17:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:26.829 17:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:26.829 17:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:26.829 17:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.829 17:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.829 17:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.829 17:57:19 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:26.829 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:26.829 00:05:26.829 real 0m5.321s 00:05:26.829 user 0m1.374s 00:05:26.829 sys 0m1.604s 00:05:26.829 17:57:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.829 17:57:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:26.829 ************************************ 00:05:26.829 END TEST nvme_mount 00:05:26.829 ************************************ 00:05:26.829 17:57:19 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:26.829 17:57:19 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.829 17:57:19 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.829 17:57:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:26.829 ************************************ 00:05:26.829 START TEST dm_mount 00:05:26.829 ************************************ 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- 
setup/common.sh@39 -- # local disk=nvme0n1 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:26.829 17:57:19 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:27.764 Creating new GPT entries in memory. 00:05:27.764 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:27.764 other utilities. 00:05:27.764 17:57:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:27.764 17:57:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.764 17:57:20 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:27.764 17:57:20 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:27.764 17:57:20 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:28.699 Creating new GPT entries in memory. 00:05:28.699 The operation has completed successfully. 00:05:28.699 17:57:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:28.699 17:57:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.699 17:57:21 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:28.699 17:57:21 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:28.699 17:57:21 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:30.113 The operation has completed successfully. 
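Both mount tests drive partitioning the same way, visible in the sgdisk calls above: zap the GPT, then lay out equal-sized partitions one per loop iteration, serializing each table rewrite with flock. A sketch reproducing the exact ranges from this run (262144 sectors = 128 MiB per partition; nvme_mount makes one partition, dm_mount two):

    disk=/dev/nvme0n1            # scratch disk used by this run
    part_no=2
    size=$((1073741824 / 4096))  # 262144 sectors, as in the trace

    sgdisk "$disk" --zap-all     # destroy any existing GPT/MBR structures

    part_start=0 part_end=0
    for ((part = 1; part <= part_no; part++)); do
        ((part_start = part_start == 0 ? 2048 : part_end + 1))
        ((part_end = part_start + size - 1))
        # flock keeps concurrent openers away while the table is rewritten.
        flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
    done
    # Iteration 1 creates 1:2048:264191, iteration 2 creates 2:264192:526335.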
00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59411 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.113 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.380 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.380 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.380 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.380 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.380 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.380 17:57:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.641 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.641 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.898 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.898 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:30.898 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.898 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:30.898 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:30.898 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.899 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:30.899 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:30.899 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:30.899 17:57:23 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:05:30.899 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:30.899 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:30.899 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:30.899 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:30.899 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.899 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:30.899 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:30.899 17:57:23 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.899 17:57:23 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:31.157 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:31.157 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:31.157 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:31.157 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.157 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:31.157 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.157 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:31.157 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.415 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:31.415 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.415 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:31.415 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.674 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:31.674 17:57:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.674 17:57:24 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.674 17:57:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:31.674 17:57:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:31.674 17:57:24 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:31.674 17:57:24 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.933 17:57:24 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:31.933 17:57:24 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:31.933 17:57:24 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.933 17:57:24 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:05:31.933 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:31.933 17:57:24 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:31.933 17:57:24 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:31.933 00:05:31.933 real 0m5.086s 00:05:31.933 user 0m0.966s 00:05:31.933 sys 0m1.051s 00:05:31.933 17:57:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.933 17:57:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:31.933 ************************************ 00:05:31.933 END TEST dm_mount 00:05:31.933 ************************************ 00:05:31.933 17:57:24 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:31.933 17:57:24 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:31.933 17:57:24 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.933 17:57:24 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.933 17:57:24 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:31.933 17:57:24 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:31.933 17:57:24 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:32.191 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:32.191 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:32.191 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:32.191 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:32.191 17:57:24 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:32.191 17:57:24 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.191 17:57:24 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:32.191 17:57:24 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:32.191 17:57:24 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:32.191 17:57:24 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:32.191 17:57:24 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:32.191 00:05:32.191 real 0m12.419s 00:05:32.191 user 0m3.255s 00:05:32.191 sys 0m3.437s 00:05:32.191 17:57:24 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.191 17:57:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:32.191 ************************************ 00:05:32.191 END TEST devices 00:05:32.191 ************************************ 00:05:32.191 ************************************ 00:05:32.191 END TEST setup.sh 00:05:32.191 ************************************ 00:05:32.191 00:05:32.191 real 0m44.199s 00:05:32.191 user 0m10.293s 00:05:32.191 sys 0m13.587s 00:05:32.191 17:57:24 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.191 17:57:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:32.191 17:57:24 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:32.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.326 Hugepages 00:05:33.326 node hugesize free / total 00:05:33.326 node0 1048576kB 0 / 0 00:05:33.326 node0 2048kB 2048 / 2048 00:05:33.326 
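The cleanup path traced above (cleanup -> cleanup_nvme/cleanup_dm) tears the scratch devices back down: unmount the test mountpoint if it is still mounted, remove the dm target with dmsetup if present, and wipefs both the partition and the parent disk so the ext4, GPT, and PMBR signatures are gone for the next run. A sketch of the nvme side, using this build's paths:

    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    if mountpoint -q "$mnt"; then
        umount "$mnt"
    fi
    # Wipe the partition first, then the whole disk; the second call also
    # clears the primary and backup GPT headers plus the protective MBR.
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1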
00:05:33.326 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:33.326 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:33.326 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:33.632 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:33.632 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:33.632 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:33.632 17:57:25 -- spdk/autotest.sh@130 -- # uname -s 00:05:33.632 17:57:25 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:33.632 17:57:25 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:33.632 17:57:25 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:34.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.767 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.767 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.767 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.767 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.767 17:57:27 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:36.173 17:57:28 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:36.173 17:57:28 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:36.173 17:57:28 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:36.173 17:57:28 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:36.173 17:57:28 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:36.173 17:57:28 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:36.173 17:57:28 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.173 17:57:28 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.173 17:57:28 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:36.173 17:57:28 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:05:36.173 17:57:28 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:36.173 17:57:28 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.173 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.431 Waiting for block devices as requested 00:05:36.431 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.690 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.690 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.690 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:41.960 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:41.960 17:57:34 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:41.960 17:57:34 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:41.960 17:57:34 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:41.960 17:57:34 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:05:41.960 17:57:34 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:41.960 17:57:34 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1503 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:41.960 17:57:34 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:05:41.960 17:57:34 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:05:41.960 17:57:34 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:41.960 17:57:34 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:41.960 17:57:34 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:05:41.960 17:57:34 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:41.960 17:57:34 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:41.960 17:57:34 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:41.960 17:57:34 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1553 -- # continue 00:05:41.960 17:57:34 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:41.960 17:57:34 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:41.960 17:57:34 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:41.960 17:57:34 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:05:41.960 17:57:34 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:41.960 17:57:34 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:41.960 17:57:34 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:41.960 17:57:34 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:41.960 17:57:34 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:41.960 17:57:34 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:41.960 17:57:34 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:41.960 17:57:34 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:41.960 17:57:34 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:41.960 17:57:34 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:41.960 17:57:34 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1553 -- # continue 00:05:41.960 17:57:34 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:41.960 17:57:34 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:41.960 17:57:34 -- common/autotest_common.sh@1498 -- # grep 0000:00:12.0/nvme/nvme 00:05:41.960 17:57:34 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 
/sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:41.960 17:57:34 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:41.960 17:57:34 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:41.960 17:57:34 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme2 00:05:41.960 17:57:34 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme2 00:05:41.960 17:57:34 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme2 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme2 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:41.960 17:57:34 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:41.960 17:57:34 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:41.960 17:57:34 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme2 00:05:41.960 17:57:34 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:41.960 17:57:34 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:41.960 17:57:34 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1553 -- # continue 00:05:41.960 17:57:34 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:41.960 17:57:34 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:41.960 17:57:34 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:41.960 17:57:34 -- common/autotest_common.sh@1498 -- # grep 0000:00:13.0/nvme/nvme 00:05:41.960 17:57:34 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:41.960 17:57:34 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:41.960 17:57:34 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme3 00:05:41.960 17:57:34 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme3 00:05:41.960 17:57:34 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme3 ]] 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme3 00:05:41.960 17:57:34 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:41.961 17:57:34 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:41.961 17:57:34 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:41.961 17:57:34 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:41.961 17:57:34 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:41.961 17:57:34 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme3 00:05:41.961 17:57:34 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:41.961 17:57:34 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:41.961 17:57:34 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:41.961 17:57:34 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:41.961 17:57:34 -- common/autotest_common.sh@1553 -- # continue 00:05:41.961 17:57:34 -- spdk/autotest.sh@135 -- # 
timing_exit pre_cleanup 00:05:41.961 17:57:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.961 17:57:34 -- common/autotest_common.sh@10 -- # set +x 00:05:41.961 17:57:34 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:41.961 17:57:34 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:41.961 17:57:34 -- common/autotest_common.sh@10 -- # set +x 00:05:41.961 17:57:34 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.095 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.095 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.095 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.095 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.354 17:57:35 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:43.354 17:57:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.354 17:57:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.354 17:57:35 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:43.354 17:57:35 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:43.354 17:57:35 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:43.354 17:57:35 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:43.354 17:57:35 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:43.354 17:57:35 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:43.354 17:57:35 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:43.354 17:57:35 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:43.354 17:57:35 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.354 17:57:35 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:43.354 17:57:35 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:43.354 17:57:35 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:05:43.354 17:57:35 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:43.354 17:57:35 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:43.354 17:57:35 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:43.354 17:57:35 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:43.354 17:57:35 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.354 17:57:35 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:43.354 17:57:35 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:43.354 17:57:35 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:43.354 17:57:35 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.354 17:57:35 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:43.354 17:57:35 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:43.354 17:57:35 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:43.354 17:57:35 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.354 17:57:35 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:43.354 17:57:35 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:43.354 17:57:35 -- common/autotest_common.sh@1576 -- # device=0x0010 
00:05:43.354 17:57:35 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.354 17:57:35 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:43.354 17:57:35 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:43.354 17:57:35 -- common/autotest_common.sh@1589 -- # return 0 00:05:43.354 17:57:35 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:43.354 17:57:35 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:43.354 17:57:35 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:43.354 17:57:35 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:43.354 17:57:35 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:43.354 17:57:35 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:43.354 17:57:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.354 17:57:35 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.354 17:57:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.354 17:57:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.354 17:57:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.354 ************************************ 00:05:43.354 START TEST env 00:05:43.354 ************************************ 00:05:43.354 17:57:35 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.613 * Looking for test storage... 00:05:43.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:43.613 17:57:35 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.613 17:57:35 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.613 17:57:35 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.613 17:57:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.613 ************************************ 00:05:43.613 START TEST env_memory 00:05:43.613 ************************************ 00:05:43.613 17:57:35 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.613 00:05:43.613 00:05:43.613 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.613 http://cunit.sourceforge.net/ 00:05:43.613 00:05:43.613 00:05:43.613 Suite: memory 00:05:43.613 Test: alloc and free memory map ...[2024-05-15 17:57:36.015654] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:43.613 passed 00:05:43.613 Test: mem map translation ...[2024-05-15 17:57:36.084468] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:43.613 [2024-05-15 17:57:36.084732] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:43.613 [2024-05-15 17:57:36.084988] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:43.613 [2024-05-15 17:57:36.085221] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:43.873 passed 00:05:43.873 Test: mem map registration ...[2024-05-15 17:57:36.183682] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:43.873 [2024-05-15 
17:57:36.183935] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:43.873 passed 00:05:43.873 Test: mem map adjacent registrations ...passed 00:05:43.873 00:05:43.873 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.873 suites 1 1 n/a 0 0 00:05:43.873 tests 4 4 4 0 0 00:05:43.873 asserts 152 152 152 0 n/a 00:05:43.873 00:05:43.873 Elapsed time = 0.354 seconds 00:05:43.873 00:05:43.873 real 0m0.395s 00:05:43.873 user 0m0.361s 00:05:43.873 sys 0m0.030s 00:05:43.873 17:57:36 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.873 17:57:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:43.873 ************************************ 00:05:43.873 END TEST env_memory 00:05:43.873 ************************************ 00:05:43.873 17:57:36 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:43.873 17:57:36 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.873 17:57:36 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.873 17:57:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.873 ************************************ 00:05:43.873 START TEST env_vtophys 00:05:43.873 ************************************ 00:05:43.873 17:57:36 env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:44.131 EAL: lib.eal log level changed from notice to debug 00:05:44.131 EAL: Detected lcore 0 as core 0 on socket 0 00:05:44.131 EAL: Detected lcore 1 as core 0 on socket 0 00:05:44.131 EAL: Detected lcore 2 as core 0 on socket 0 00:05:44.131 EAL: Detected lcore 3 as core 0 on socket 0 00:05:44.131 EAL: Detected lcore 4 as core 0 on socket 0 00:05:44.131 EAL: Detected lcore 5 as core 0 on socket 0 00:05:44.131 EAL: Detected lcore 6 as core 0 on socket 0 00:05:44.131 EAL: Detected lcore 7 as core 0 on socket 0 00:05:44.131 EAL: Detected lcore 8 as core 0 on socket 0 00:05:44.131 EAL: Detected lcore 9 as core 0 on socket 0 00:05:44.131 EAL: Maximum logical cores by configuration: 128 00:05:44.131 EAL: Detected CPU lcores: 10 00:05:44.131 EAL: Detected NUMA nodes: 1 00:05:44.131 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:44.131 EAL: Detected shared linkage of DPDK 00:05:44.131 EAL: No shared files mode enabled, IPC will be disabled 00:05:44.131 EAL: Selected IOVA mode 'PA' 00:05:44.131 EAL: Probing VFIO support... 00:05:44.132 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:44.132 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:44.132 EAL: Ask a virtual area of 0x2e000 bytes 00:05:44.132 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:44.132 EAL: Setting up physically contiguous memory... 
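Annotation: the "Probing VFIO support..." records above show EAL skipping VFIO because no vfio modules are loaded; as the error text indicates, the check is driven by /sys/module. A hedged sketch of an equivalent pre-flight check (module names copied from the log):

  # Mirror EAL's VFIO availability check seen in the log above
  for m in vfio vfio_pci; do
    if [[ ! -d /sys/module/$m ]]; then
      echo "Module /sys/module/$m not found; EAL skips VFIO and selects IOVA mode 'PA'"
    fi
  done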
00:05:44.132 EAL: Setting maximum number of open files to 524288 00:05:44.132 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:44.132 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:44.132 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.132 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:44.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.132 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.132 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:44.132 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:44.132 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.132 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:44.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.132 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.132 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:44.132 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:44.132 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.132 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:44.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.132 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.132 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:44.132 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:44.132 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.132 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:44.132 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.132 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.132 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:44.132 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:44.132 EAL: Hugepages will be freed exactly as allocated. 00:05:44.132 EAL: No shared files mode enabled, IPC is disabled 00:05:44.132 EAL: No shared files mode enabled, IPC is disabled 00:05:44.132 EAL: TSC frequency is ~2200000 KHz 00:05:44.132 EAL: Main lcore 0 is ready (tid=7f0548011a40;cpuset=[0]) 00:05:44.132 EAL: Trying to obtain current memory policy. 00:05:44.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.132 EAL: Restoring previous memory policy: 0 00:05:44.132 EAL: request: mp_malloc_sync 00:05:44.132 EAL: No shared files mode enabled, IPC is disabled 00:05:44.132 EAL: Heap on socket 0 was expanded by 2MB 00:05:44.132 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:44.132 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:44.132 EAL: Mem event callback 'spdk:(nil)' registered 00:05:44.132 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:44.132 00:05:44.132 00:05:44.132 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.132 http://cunit.sourceforge.net/ 00:05:44.132 00:05:44.132 00:05:44.132 Suite: components_suite 00:05:44.699 Test: vtophys_malloc_test ...passed 00:05:44.699 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:44.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.699 EAL: Restoring previous memory policy: 4 00:05:44.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.699 EAL: request: mp_malloc_sync 00:05:44.699 EAL: No shared files mode enabled, IPC is disabled 00:05:44.699 EAL: Heap on socket 0 was expanded by 4MB 00:05:44.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.699 EAL: request: mp_malloc_sync 00:05:44.699 EAL: No shared files mode enabled, IPC is disabled 00:05:44.699 EAL: Heap on socket 0 was shrunk by 4MB 00:05:44.699 EAL: Trying to obtain current memory policy. 00:05:44.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.699 EAL: Restoring previous memory policy: 4 00:05:44.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.699 EAL: request: mp_malloc_sync 00:05:44.699 EAL: No shared files mode enabled, IPC is disabled 00:05:44.699 EAL: Heap on socket 0 was expanded by 6MB 00:05:44.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.699 EAL: request: mp_malloc_sync 00:05:44.699 EAL: No shared files mode enabled, IPC is disabled 00:05:44.699 EAL: Heap on socket 0 was shrunk by 6MB 00:05:44.699 EAL: Trying to obtain current memory policy. 00:05:44.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.699 EAL: Restoring previous memory policy: 4 00:05:44.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.699 EAL: request: mp_malloc_sync 00:05:44.699 EAL: No shared files mode enabled, IPC is disabled 00:05:44.699 EAL: Heap on socket 0 was expanded by 10MB 00:05:44.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.699 EAL: request: mp_malloc_sync 00:05:44.699 EAL: No shared files mode enabled, IPC is disabled 00:05:44.699 EAL: Heap on socket 0 was shrunk by 10MB 00:05:44.699 EAL: Trying to obtain current memory policy. 00:05:44.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.699 EAL: Restoring previous memory policy: 4 00:05:44.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.699 EAL: request: mp_malloc_sync 00:05:44.699 EAL: No shared files mode enabled, IPC is disabled 00:05:44.699 EAL: Heap on socket 0 was expanded by 18MB 00:05:44.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.699 EAL: request: mp_malloc_sync 00:05:44.699 EAL: No shared files mode enabled, IPC is disabled 00:05:44.699 EAL: Heap on socket 0 was shrunk by 18MB 00:05:44.958 EAL: Trying to obtain current memory policy. 00:05:44.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.959 EAL: Restoring previous memory policy: 4 00:05:44.959 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.959 EAL: request: mp_malloc_sync 00:05:44.959 EAL: No shared files mode enabled, IPC is disabled 00:05:44.959 EAL: Heap on socket 0 was expanded by 34MB 00:05:44.959 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.959 EAL: request: mp_malloc_sync 00:05:44.959 EAL: No shared files mode enabled, IPC is disabled 00:05:44.959 EAL: Heap on socket 0 was shrunk by 34MB 00:05:44.959 EAL: Trying to obtain current memory policy. 
00:05:44.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.959 EAL: Restoring previous memory policy: 4 00:05:44.959 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.959 EAL: request: mp_malloc_sync 00:05:44.959 EAL: No shared files mode enabled, IPC is disabled 00:05:44.959 EAL: Heap on socket 0 was expanded by 66MB 00:05:44.959 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.218 EAL: request: mp_malloc_sync 00:05:45.218 EAL: No shared files mode enabled, IPC is disabled 00:05:45.218 EAL: Heap on socket 0 was shrunk by 66MB 00:05:45.218 EAL: Trying to obtain current memory policy. 00:05:45.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.218 EAL: Restoring previous memory policy: 4 00:05:45.218 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.218 EAL: request: mp_malloc_sync 00:05:45.218 EAL: No shared files mode enabled, IPC is disabled 00:05:45.218 EAL: Heap on socket 0 was expanded by 130MB 00:05:45.478 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.478 EAL: request: mp_malloc_sync 00:05:45.478 EAL: No shared files mode enabled, IPC is disabled 00:05:45.478 EAL: Heap on socket 0 was shrunk by 130MB 00:05:45.736 EAL: Trying to obtain current memory policy. 00:05:45.736 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.736 EAL: Restoring previous memory policy: 4 00:05:45.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.736 EAL: request: mp_malloc_sync 00:05:45.736 EAL: No shared files mode enabled, IPC is disabled 00:05:45.736 EAL: Heap on socket 0 was expanded by 258MB 00:05:45.994 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.251 EAL: request: mp_malloc_sync 00:05:46.251 EAL: No shared files mode enabled, IPC is disabled 00:05:46.251 EAL: Heap on socket 0 was shrunk by 258MB 00:05:46.509 EAL: Trying to obtain current memory policy. 00:05:46.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.768 EAL: Restoring previous memory policy: 4 00:05:46.768 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.768 EAL: request: mp_malloc_sync 00:05:46.768 EAL: No shared files mode enabled, IPC is disabled 00:05:46.768 EAL: Heap on socket 0 was expanded by 514MB 00:05:47.703 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.703 EAL: request: mp_malloc_sync 00:05:47.703 EAL: No shared files mode enabled, IPC is disabled 00:05:47.703 EAL: Heap on socket 0 was shrunk by 514MB 00:05:48.270 EAL: Trying to obtain current memory policy. 
00:05:48.270 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.529 EAL: Restoring previous memory policy: 4 00:05:48.529 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.529 EAL: request: mp_malloc_sync 00:05:48.529 EAL: No shared files mode enabled, IPC is disabled 00:05:48.529 EAL: Heap on socket 0 was expanded by 1026MB 00:05:50.430 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.430 EAL: request: mp_malloc_sync 00:05:50.430 EAL: No shared files mode enabled, IPC is disabled 00:05:50.430 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:51.805 passed 00:05:51.805 00:05:51.805 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.805 suites 1 1 n/a 0 0 00:05:51.805 tests 2 2 2 0 0 00:05:51.805 asserts 5439 5439 5439 0 n/a 00:05:51.805 00:05:51.805 Elapsed time = 7.622 seconds 00:05:51.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.805 EAL: request: mp_malloc_sync 00:05:51.805 EAL: No shared files mode enabled, IPC is disabled 00:05:51.805 EAL: Heap on socket 0 was shrunk by 2MB 00:05:51.805 EAL: No shared files mode enabled, IPC is disabled 00:05:51.805 EAL: No shared files mode enabled, IPC is disabled 00:05:51.805 EAL: No shared files mode enabled, IPC is disabled 00:05:51.805 00:05:51.805 real 0m7.929s 00:05:51.805 user 0m6.694s 00:05:51.805 sys 0m1.066s 00:05:51.805 17:57:44 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.805 ************************************ 00:05:51.805 END TEST env_vtophys 00:05:51.805 ************************************ 00:05:51.805 17:57:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:52.103 17:57:44 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:52.103 17:57:44 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.103 17:57:44 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.103 17:57:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.103 ************************************ 00:05:52.103 START TEST env_pci 00:05:52.103 ************************************ 00:05:52.103 17:57:44 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:52.103 00:05:52.103 00:05:52.103 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.103 http://cunit.sourceforge.net/ 00:05:52.103 00:05:52.103 00:05:52.103 Suite: pci 00:05:52.103 Test: pci_hook ...[2024-05-15 17:57:44.385482] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61238 has claimed it 00:05:52.103 passed 00:05:52.103 00:05:52.103 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.103 suites 1 1 n/a 0 0 00:05:52.103 tests 1 1 1 0 0 00:05:52.103 asserts 25 25 25 0 n/a 00:05:52.103 00:05:52.103 Elapsed time = 0.009 seconds 00:05:52.103 EAL: Cannot find device (10000:00:01.0) 00:05:52.103 EAL: Failed to attach device on primary process 00:05:52.103 ************************************ 00:05:52.103 END TEST env_pci 00:05:52.103 ************************************ 00:05:52.103 00:05:52.103 real 0m0.087s 00:05:52.103 user 0m0.039s 00:05:52.103 sys 0m0.048s 00:05:52.103 17:57:44 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.103 17:57:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:52.103 17:57:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:52.103 17:57:44 env -- env/env.sh@15 -- # uname 00:05:52.103 17:57:44 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:52.103 17:57:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:52.103 17:57:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.103 17:57:44 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:52.103 17:57:44 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.103 17:57:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.103 ************************************ 00:05:52.103 START TEST env_dpdk_post_init 00:05:52.103 ************************************ 00:05:52.103 17:57:44 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.103 EAL: Detected CPU lcores: 10 00:05:52.103 EAL: Detected NUMA nodes: 1 00:05:52.103 EAL: Detected shared linkage of DPDK 00:05:52.103 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.103 EAL: Selected IOVA mode 'PA' 00:05:52.361 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.361 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:52.361 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:52.361 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:52.361 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:52.361 Starting DPDK initialization... 00:05:52.361 Starting SPDK post initialization... 00:05:52.361 SPDK NVMe probe 00:05:52.361 Attaching to 0000:00:10.0 00:05:52.361 Attaching to 0000:00:11.0 00:05:52.361 Attaching to 0000:00:12.0 00:05:52.361 Attaching to 0000:00:13.0 00:05:52.361 Attached to 0000:00:13.0 00:05:52.361 Attached to 0000:00:10.0 00:05:52.361 Attached to 0000:00:11.0 00:05:52.361 Attached to 0000:00:12.0 00:05:52.361 Cleaning up... 
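Annotation: the controllers above attach out of probe order (0000:00:13.0 completes first), which is unsurprising when attach callbacks complete asynchronously. A small sketch for checking the ordering from a saved copy of this output; $LOG is a hypothetical file holding the capture:

  # Compare probe order vs. attach order in a captured log ($LOG is hypothetical)
  grep -E 'Probe PCI driver: spdk_nvme|Attached to' "$LOG"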
00:05:52.361 ************************************ 00:05:52.361 END TEST env_dpdk_post_init 00:05:52.361 ************************************ 00:05:52.361 00:05:52.361 real 0m0.291s 00:05:52.361 user 0m0.097s 00:05:52.361 sys 0m0.096s 00:05:52.361 17:57:44 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.361 17:57:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.361 17:57:44 env -- env/env.sh@26 -- # uname 00:05:52.361 17:57:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:52.361 17:57:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.361 17:57:44 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.361 17:57:44 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.361 17:57:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.361 ************************************ 00:05:52.361 START TEST env_mem_callbacks 00:05:52.361 ************************************ 00:05:52.361 17:57:44 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.620 EAL: Detected CPU lcores: 10 00:05:52.620 EAL: Detected NUMA nodes: 1 00:05:52.620 EAL: Detected shared linkage of DPDK 00:05:52.620 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.620 EAL: Selected IOVA mode 'PA' 00:05:52.620 00:05:52.620 00:05:52.620 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.620 http://cunit.sourceforge.net/ 00:05:52.620 00:05:52.620 00:05:52.620 Suite: memory 00:05:52.620 Test: test ... 00:05:52.620 register 0x200000200000 2097152 00:05:52.620 malloc 3145728 00:05:52.620 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.620 register 0x200000400000 4194304 00:05:52.620 buf 0x2000004fffc0 len 3145728 PASSED 00:05:52.620 malloc 64 00:05:52.620 buf 0x2000004ffec0 len 64 PASSED 00:05:52.620 malloc 4194304 00:05:52.620 register 0x200000800000 6291456 00:05:52.620 buf 0x2000009fffc0 len 4194304 PASSED 00:05:52.620 free 0x2000004fffc0 3145728 00:05:52.620 free 0x2000004ffec0 64 00:05:52.620 unregister 0x200000400000 4194304 PASSED 00:05:52.620 free 0x2000009fffc0 4194304 00:05:52.620 unregister 0x200000800000 6291456 PASSED 00:05:52.620 malloc 8388608 00:05:52.620 register 0x200000400000 10485760 00:05:52.620 buf 0x2000005fffc0 len 8388608 PASSED 00:05:52.620 free 0x2000005fffc0 8388608 00:05:52.620 unregister 0x200000400000 10485760 PASSED 00:05:52.620 passed 00:05:52.620 00:05:52.620 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.620 suites 1 1 n/a 0 0 00:05:52.620 tests 1 1 1 0 0 00:05:52.620 asserts 15 15 15 0 n/a 00:05:52.620 00:05:52.620 Elapsed time = 0.060 seconds 00:05:52.879 00:05:52.879 real 0m0.281s 00:05:52.879 user 0m0.096s 00:05:52.879 sys 0m0.081s 00:05:52.879 17:57:45 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.879 17:57:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:52.879 ************************************ 00:05:52.879 END TEST env_mem_callbacks 00:05:52.879 ************************************ 00:05:52.879 ************************************ 00:05:52.879 END TEST env 00:05:52.879 ************************************ 00:05:52.879 00:05:52.879 real 0m9.335s 00:05:52.879 user 0m7.401s 00:05:52.879 sys 0m1.543s 00:05:52.879 17:57:45 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.879 17:57:45 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:52.879 17:57:45 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:52.879 17:57:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.879 17:57:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.879 17:57:45 -- common/autotest_common.sh@10 -- # set +x 00:05:52.879 ************************************ 00:05:52.879 START TEST rpc 00:05:52.879 ************************************ 00:05:52.879 17:57:45 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:52.879 * Looking for test storage... 00:05:52.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:52.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.879 17:57:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=61357 00:05:52.879 17:57:45 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:52.879 17:57:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.879 17:57:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 61357 00:05:52.879 17:57:45 rpc -- common/autotest_common.sh@827 -- # '[' -z 61357 ']' 00:05:52.879 17:57:45 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.879 17:57:45 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:52.879 17:57:45 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.879 17:57:45 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:52.879 17:57:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.138 [2024-05-15 17:57:45.411154] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:05:53.138 [2024-05-15 17:57:45.411355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61357 ] 00:05:53.138 [2024-05-15 17:57:45.589261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.396 [2024-05-15 17:57:45.881812] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:53.396 [2024-05-15 17:57:45.881888] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 61357' to capture a snapshot of events at runtime. 00:05:53.396 [2024-05-15 17:57:45.881908] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.396 [2024-05-15 17:57:45.881925] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.396 [2024-05-15 17:57:45.881937] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid61357 for offline analysis/debug. 
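Annotation: the app_setup_trace NOTICE records above spell out how to grab the bdev tracepoints while the target runs. The same commands collected in one place (PID and paths copied verbatim from the log; the target with pid 61357 must still be running):

  # Capture the trace snapshot the NOTICE lines above describe
  spdk_trace -s spdk_tgt -p 61357            # live snapshot of the 'bdev' tpoint group
  cp /dev/shm/spdk_tgt_trace.pid61357 /tmp/  # or keep the shm file for offline analysis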
00:05:53.396 [2024-05-15 17:57:45.881984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.332 17:57:46 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:54.332 17:57:46 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:54.332 17:57:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.332 17:57:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.332 17:57:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:54.332 17:57:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:54.332 17:57:46 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.332 17:57:46 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.332 17:57:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.332 ************************************ 00:05:54.332 START TEST rpc_integrity 00:05:54.332 ************************************ 00:05:54.332 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:54.332 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.332 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.332 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.332 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.332 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:54.332 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:54.332 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:54.332 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:54.332 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.332 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.332 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.332 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:54.332 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:54.332 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.332 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.332 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.332 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:54.332 { 00:05:54.332 "name": "Malloc0", 00:05:54.332 "aliases": [ 00:05:54.332 "2ef4f5bd-2371-4f59-996d-b60f13a1df52" 00:05:54.332 ], 00:05:54.332 "product_name": "Malloc disk", 00:05:54.332 "block_size": 512, 00:05:54.332 "num_blocks": 16384, 00:05:54.332 "uuid": "2ef4f5bd-2371-4f59-996d-b60f13a1df52", 00:05:54.332 "assigned_rate_limits": { 00:05:54.332 "rw_ios_per_sec": 0, 00:05:54.332 "rw_mbytes_per_sec": 0, 00:05:54.332 "r_mbytes_per_sec": 0, 00:05:54.332 "w_mbytes_per_sec": 0 00:05:54.332 }, 00:05:54.332 "claimed": false, 00:05:54.332 "zoned": false, 00:05:54.332 "supported_io_types": { 00:05:54.332 "read": true, 00:05:54.332 "write": true, 00:05:54.332 "unmap": true, 00:05:54.332 "write_zeroes": 
true, 00:05:54.332 "flush": true, 00:05:54.332 "reset": true, 00:05:54.332 "compare": false, 00:05:54.332 "compare_and_write": false, 00:05:54.332 "abort": true, 00:05:54.332 "nvme_admin": false, 00:05:54.332 "nvme_io": false 00:05:54.332 }, 00:05:54.332 "memory_domains": [ 00:05:54.332 { 00:05:54.332 "dma_device_id": "system", 00:05:54.332 "dma_device_type": 1 00:05:54.332 }, 00:05:54.332 { 00:05:54.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.332 "dma_device_type": 2 00:05:54.332 } 00:05:54.332 ], 00:05:54.332 "driver_specific": {} 00:05:54.332 } 00:05:54.332 ]' 00:05:54.332 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:54.592 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:54.592 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:54.592 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.592 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.592 [2024-05-15 17:57:46.850631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:54.592 [2024-05-15 17:57:46.850738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.592 [2024-05-15 17:57:46.850778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:05:54.592 [2024-05-15 17:57:46.850810] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.592 [2024-05-15 17:57:46.853659] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.592 [2024-05-15 17:57:46.853744] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:54.592 Passthru0 00:05:54.592 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.592 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:54.592 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.592 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.592 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.592 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.592 { 00:05:54.592 "name": "Malloc0", 00:05:54.592 "aliases": [ 00:05:54.592 "2ef4f5bd-2371-4f59-996d-b60f13a1df52" 00:05:54.592 ], 00:05:54.592 "product_name": "Malloc disk", 00:05:54.592 "block_size": 512, 00:05:54.592 "num_blocks": 16384, 00:05:54.592 "uuid": "2ef4f5bd-2371-4f59-996d-b60f13a1df52", 00:05:54.592 "assigned_rate_limits": { 00:05:54.592 "rw_ios_per_sec": 0, 00:05:54.592 "rw_mbytes_per_sec": 0, 00:05:54.592 "r_mbytes_per_sec": 0, 00:05:54.592 "w_mbytes_per_sec": 0 00:05:54.592 }, 00:05:54.592 "claimed": true, 00:05:54.592 "claim_type": "exclusive_write", 00:05:54.592 "zoned": false, 00:05:54.592 "supported_io_types": { 00:05:54.592 "read": true, 00:05:54.593 "write": true, 00:05:54.593 "unmap": true, 00:05:54.593 "write_zeroes": true, 00:05:54.593 "flush": true, 00:05:54.593 "reset": true, 00:05:54.593 "compare": false, 00:05:54.593 "compare_and_write": false, 00:05:54.593 "abort": true, 00:05:54.593 "nvme_admin": false, 00:05:54.593 "nvme_io": false 00:05:54.593 }, 00:05:54.593 "memory_domains": [ 00:05:54.593 { 00:05:54.593 "dma_device_id": "system", 00:05:54.593 "dma_device_type": 1 00:05:54.593 }, 00:05:54.593 { 00:05:54.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.593 "dma_device_type": 2 00:05:54.593 } 
00:05:54.593 ], 00:05:54.593 "driver_specific": {} 00:05:54.593 }, 00:05:54.593 { 00:05:54.593 "name": "Passthru0", 00:05:54.593 "aliases": [ 00:05:54.593 "c1ce27a0-9e6d-5ec0-bd56-dc81172a91a2" 00:05:54.593 ], 00:05:54.593 "product_name": "passthru", 00:05:54.593 "block_size": 512, 00:05:54.593 "num_blocks": 16384, 00:05:54.593 "uuid": "c1ce27a0-9e6d-5ec0-bd56-dc81172a91a2", 00:05:54.593 "assigned_rate_limits": { 00:05:54.593 "rw_ios_per_sec": 0, 00:05:54.593 "rw_mbytes_per_sec": 0, 00:05:54.593 "r_mbytes_per_sec": 0, 00:05:54.593 "w_mbytes_per_sec": 0 00:05:54.593 }, 00:05:54.593 "claimed": false, 00:05:54.593 "zoned": false, 00:05:54.593 "supported_io_types": { 00:05:54.593 "read": true, 00:05:54.593 "write": true, 00:05:54.593 "unmap": true, 00:05:54.593 "write_zeroes": true, 00:05:54.593 "flush": true, 00:05:54.593 "reset": true, 00:05:54.593 "compare": false, 00:05:54.593 "compare_and_write": false, 00:05:54.593 "abort": true, 00:05:54.593 "nvme_admin": false, 00:05:54.593 "nvme_io": false 00:05:54.593 }, 00:05:54.593 "memory_domains": [ 00:05:54.593 { 00:05:54.593 "dma_device_id": "system", 00:05:54.593 "dma_device_type": 1 00:05:54.593 }, 00:05:54.593 { 00:05:54.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.593 "dma_device_type": 2 00:05:54.593 } 00:05:54.593 ], 00:05:54.593 "driver_specific": { 00:05:54.593 "passthru": { 00:05:54.593 "name": "Passthru0", 00:05:54.593 "base_bdev_name": "Malloc0" 00:05:54.593 } 00:05:54.593 } 00:05:54.593 } 00:05:54.593 ]' 00:05:54.593 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:54.593 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:54.593 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:54.593 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.593 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.593 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.593 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:54.593 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.593 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.593 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.593 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:54.593 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.593 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.593 17:57:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.593 17:57:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:54.593 17:57:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:54.593 ************************************ 00:05:54.593 END TEST rpc_integrity 00:05:54.593 ************************************ 00:05:54.593 17:57:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:54.593 00:05:54.593 real 0m0.362s 00:05:54.593 user 0m0.228s 00:05:54.593 sys 0m0.038s 00:05:54.593 17:57:47 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.593 17:57:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.854 17:57:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:54.854 17:57:47 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.854 17:57:47 rpc -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.854 17:57:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.854 ************************************ 00:05:54.854 START TEST rpc_plugins 00:05:54.854 ************************************ 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:54.854 17:57:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.854 17:57:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:54.854 17:57:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.854 17:57:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:54.854 { 00:05:54.854 "name": "Malloc1", 00:05:54.854 "aliases": [ 00:05:54.854 "d8cebdef-2c05-4f6c-aa90-c4d1af657eb1" 00:05:54.854 ], 00:05:54.854 "product_name": "Malloc disk", 00:05:54.854 "block_size": 4096, 00:05:54.854 "num_blocks": 256, 00:05:54.854 "uuid": "d8cebdef-2c05-4f6c-aa90-c4d1af657eb1", 00:05:54.854 "assigned_rate_limits": { 00:05:54.854 "rw_ios_per_sec": 0, 00:05:54.854 "rw_mbytes_per_sec": 0, 00:05:54.854 "r_mbytes_per_sec": 0, 00:05:54.854 "w_mbytes_per_sec": 0 00:05:54.854 }, 00:05:54.854 "claimed": false, 00:05:54.854 "zoned": false, 00:05:54.854 "supported_io_types": { 00:05:54.854 "read": true, 00:05:54.854 "write": true, 00:05:54.854 "unmap": true, 00:05:54.854 "write_zeroes": true, 00:05:54.854 "flush": true, 00:05:54.854 "reset": true, 00:05:54.854 "compare": false, 00:05:54.854 "compare_and_write": false, 00:05:54.854 "abort": true, 00:05:54.854 "nvme_admin": false, 00:05:54.854 "nvme_io": false 00:05:54.854 }, 00:05:54.854 "memory_domains": [ 00:05:54.854 { 00:05:54.854 "dma_device_id": "system", 00:05:54.854 "dma_device_type": 1 00:05:54.854 }, 00:05:54.854 { 00:05:54.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.854 "dma_device_type": 2 00:05:54.854 } 00:05:54.854 ], 00:05:54.854 "driver_specific": {} 00:05:54.854 } 00:05:54.854 ]' 00:05:54.854 17:57:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:54.854 17:57:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:54.854 17:57:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.854 17:57:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.854 17:57:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:54.854 17:57:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:54.854 ************************************ 
00:05:54.854 END TEST rpc_plugins 00:05:54.854 ************************************ 00:05:54.854 17:57:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:54.854 00:05:54.854 real 0m0.170s 00:05:54.854 user 0m0.108s 00:05:54.854 sys 0m0.018s 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.854 17:57:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.854 17:57:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:54.854 17:57:47 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.854 17:57:47 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.854 17:57:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.854 ************************************ 00:05:54.854 START TEST rpc_trace_cmd_test 00:05:54.854 ************************************ 00:05:54.854 17:57:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:54.854 17:57:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:54.854 17:57:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:54.854 17:57:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.854 17:57:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.854 17:57:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.854 17:57:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:54.854 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid61357", 00:05:54.854 "tpoint_group_mask": "0x8", 00:05:54.854 "iscsi_conn": { 00:05:54.854 "mask": "0x2", 00:05:54.854 "tpoint_mask": "0x0" 00:05:54.854 }, 00:05:54.854 "scsi": { 00:05:54.854 "mask": "0x4", 00:05:54.854 "tpoint_mask": "0x0" 00:05:54.854 }, 00:05:54.854 "bdev": { 00:05:54.854 "mask": "0x8", 00:05:54.854 "tpoint_mask": "0xffffffffffffffff" 00:05:54.854 }, 00:05:54.854 "nvmf_rdma": { 00:05:54.854 "mask": "0x10", 00:05:54.854 "tpoint_mask": "0x0" 00:05:54.854 }, 00:05:54.854 "nvmf_tcp": { 00:05:54.854 "mask": "0x20", 00:05:54.854 "tpoint_mask": "0x0" 00:05:54.854 }, 00:05:54.854 "ftl": { 00:05:54.854 "mask": "0x40", 00:05:54.854 "tpoint_mask": "0x0" 00:05:54.854 }, 00:05:54.854 "blobfs": { 00:05:54.854 "mask": "0x80", 00:05:54.854 "tpoint_mask": "0x0" 00:05:54.854 }, 00:05:54.854 "dsa": { 00:05:54.854 "mask": "0x200", 00:05:54.854 "tpoint_mask": "0x0" 00:05:54.854 }, 00:05:54.854 "thread": { 00:05:54.854 "mask": "0x400", 00:05:54.854 "tpoint_mask": "0x0" 00:05:54.854 }, 00:05:54.854 "nvme_pcie": { 00:05:54.854 "mask": "0x800", 00:05:54.854 "tpoint_mask": "0x0" 00:05:54.854 }, 00:05:54.854 "iaa": { 00:05:54.854 "mask": "0x1000", 00:05:54.854 "tpoint_mask": "0x0" 00:05:54.854 }, 00:05:54.854 "nvme_tcp": { 00:05:54.854 "mask": "0x2000", 00:05:54.854 "tpoint_mask": "0x0" 00:05:54.854 }, 00:05:54.854 "bdev_nvme": { 00:05:54.854 "mask": "0x4000", 00:05:54.854 "tpoint_mask": "0x0" 00:05:54.854 }, 00:05:54.854 "sock": { 00:05:54.854 "mask": "0x8000", 00:05:54.854 "tpoint_mask": "0x0" 00:05:54.854 } 00:05:54.854 }' 00:05:54.854 17:57:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:55.113 17:57:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:55.113 17:57:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:55.113 17:57:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:55.113 17:57:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:05:55.113 17:57:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:55.113 17:57:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:55.113 17:57:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:55.113 17:57:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:55.372 ************************************ 00:05:55.372 END TEST rpc_trace_cmd_test 00:05:55.372 ************************************ 00:05:55.372 17:57:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:55.372 00:05:55.372 real 0m0.288s 00:05:55.372 user 0m0.247s 00:05:55.372 sys 0m0.032s 00:05:55.372 17:57:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.372 17:57:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:55.372 17:57:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:55.372 17:57:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:55.372 17:57:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:55.372 17:57:47 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.372 17:57:47 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.372 17:57:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.372 ************************************ 00:05:55.372 START TEST rpc_daemon_integrity 00:05:55.372 ************************************ 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:55.372 { 00:05:55.372 "name": "Malloc2", 00:05:55.372 "aliases": [ 00:05:55.372 "71263bce-04be-45e0-8c0f-8b613f093e28" 00:05:55.372 ], 00:05:55.372 "product_name": "Malloc disk", 00:05:55.372 "block_size": 512, 00:05:55.372 "num_blocks": 16384, 00:05:55.372 "uuid": "71263bce-04be-45e0-8c0f-8b613f093e28", 00:05:55.372 "assigned_rate_limits": { 00:05:55.372 "rw_ios_per_sec": 0, 00:05:55.372 
"rw_mbytes_per_sec": 0, 00:05:55.372 "r_mbytes_per_sec": 0, 00:05:55.372 "w_mbytes_per_sec": 0 00:05:55.372 }, 00:05:55.372 "claimed": false, 00:05:55.372 "zoned": false, 00:05:55.372 "supported_io_types": { 00:05:55.372 "read": true, 00:05:55.372 "write": true, 00:05:55.372 "unmap": true, 00:05:55.372 "write_zeroes": true, 00:05:55.372 "flush": true, 00:05:55.372 "reset": true, 00:05:55.372 "compare": false, 00:05:55.372 "compare_and_write": false, 00:05:55.372 "abort": true, 00:05:55.372 "nvme_admin": false, 00:05:55.372 "nvme_io": false 00:05:55.372 }, 00:05:55.372 "memory_domains": [ 00:05:55.372 { 00:05:55.372 "dma_device_id": "system", 00:05:55.372 "dma_device_type": 1 00:05:55.372 }, 00:05:55.372 { 00:05:55.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.372 "dma_device_type": 2 00:05:55.372 } 00:05:55.372 ], 00:05:55.372 "driver_specific": {} 00:05:55.372 } 00:05:55.372 ]' 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.372 [2024-05-15 17:57:47.833629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:55.372 [2024-05-15 17:57:47.833768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:55.372 [2024-05-15 17:57:47.833798] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:05:55.372 [2024-05-15 17:57:47.833815] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:55.372 [2024-05-15 17:57:47.836776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:55.372 [2024-05-15 17:57:47.836844] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:55.372 Passthru0 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:55.372 { 00:05:55.372 "name": "Malloc2", 00:05:55.372 "aliases": [ 00:05:55.372 "71263bce-04be-45e0-8c0f-8b613f093e28" 00:05:55.372 ], 00:05:55.372 "product_name": "Malloc disk", 00:05:55.372 "block_size": 512, 00:05:55.372 "num_blocks": 16384, 00:05:55.372 "uuid": "71263bce-04be-45e0-8c0f-8b613f093e28", 00:05:55.372 "assigned_rate_limits": { 00:05:55.372 "rw_ios_per_sec": 0, 00:05:55.372 "rw_mbytes_per_sec": 0, 00:05:55.372 "r_mbytes_per_sec": 0, 00:05:55.372 "w_mbytes_per_sec": 0 00:05:55.372 }, 00:05:55.372 "claimed": true, 00:05:55.372 "claim_type": "exclusive_write", 00:05:55.372 "zoned": false, 00:05:55.372 "supported_io_types": { 00:05:55.372 "read": true, 00:05:55.372 "write": true, 00:05:55.372 "unmap": true, 00:05:55.372 "write_zeroes": true, 00:05:55.372 "flush": true, 00:05:55.372 "reset": true, 00:05:55.372 "compare": false, 
00:05:55.372 "compare_and_write": false, 00:05:55.372 "abort": true, 00:05:55.372 "nvme_admin": false, 00:05:55.372 "nvme_io": false 00:05:55.372 }, 00:05:55.372 "memory_domains": [ 00:05:55.372 { 00:05:55.372 "dma_device_id": "system", 00:05:55.372 "dma_device_type": 1 00:05:55.372 }, 00:05:55.372 { 00:05:55.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.372 "dma_device_type": 2 00:05:55.372 } 00:05:55.372 ], 00:05:55.372 "driver_specific": {} 00:05:55.372 }, 00:05:55.372 { 00:05:55.372 "name": "Passthru0", 00:05:55.372 "aliases": [ 00:05:55.372 "41abb6ae-7512-56ad-b433-e20013024abf" 00:05:55.372 ], 00:05:55.372 "product_name": "passthru", 00:05:55.372 "block_size": 512, 00:05:55.372 "num_blocks": 16384, 00:05:55.372 "uuid": "41abb6ae-7512-56ad-b433-e20013024abf", 00:05:55.372 "assigned_rate_limits": { 00:05:55.372 "rw_ios_per_sec": 0, 00:05:55.372 "rw_mbytes_per_sec": 0, 00:05:55.372 "r_mbytes_per_sec": 0, 00:05:55.372 "w_mbytes_per_sec": 0 00:05:55.372 }, 00:05:55.372 "claimed": false, 00:05:55.372 "zoned": false, 00:05:55.372 "supported_io_types": { 00:05:55.372 "read": true, 00:05:55.372 "write": true, 00:05:55.372 "unmap": true, 00:05:55.372 "write_zeroes": true, 00:05:55.372 "flush": true, 00:05:55.372 "reset": true, 00:05:55.372 "compare": false, 00:05:55.372 "compare_and_write": false, 00:05:55.372 "abort": true, 00:05:55.372 "nvme_admin": false, 00:05:55.372 "nvme_io": false 00:05:55.372 }, 00:05:55.372 "memory_domains": [ 00:05:55.372 { 00:05:55.372 "dma_device_id": "system", 00:05:55.372 "dma_device_type": 1 00:05:55.372 }, 00:05:55.372 { 00:05:55.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.372 "dma_device_type": 2 00:05:55.372 } 00:05:55.372 ], 00:05:55.372 "driver_specific": { 00:05:55.372 "passthru": { 00:05:55.372 "name": "Passthru0", 00:05:55.372 "base_bdev_name": "Malloc2" 00:05:55.372 } 00:05:55.372 } 00:05:55.372 } 00:05:55.372 ]' 00:05:55.372 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:55.630 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:55.630 ************************************ 00:05:55.630 END TEST rpc_daemon_integrity 00:05:55.630 ************************************ 00:05:55.630 
17:57:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:55.630 00:05:55.630 real 0m0.339s 00:05:55.630 user 0m0.204s 00:05:55.630 sys 0m0.040s 00:05:55.630 17:57:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.630 17:57:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.630 17:57:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:55.630 17:57:48 rpc -- rpc/rpc.sh@84 -- # killprocess 61357 00:05:55.630 17:57:48 rpc -- common/autotest_common.sh@946 -- # '[' -z 61357 ']' 00:05:55.630 17:57:48 rpc -- common/autotest_common.sh@950 -- # kill -0 61357 00:05:55.630 17:57:48 rpc -- common/autotest_common.sh@951 -- # uname 00:05:55.630 17:57:48 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:55.630 17:57:48 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61357 00:05:55.630 killing process with pid 61357 00:05:55.630 17:57:48 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:55.630 17:57:48 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:55.630 17:57:48 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61357' 00:05:55.630 17:57:48 rpc -- common/autotest_common.sh@965 -- # kill 61357 00:05:55.630 17:57:48 rpc -- common/autotest_common.sh@970 -- # wait 61357 00:05:58.157 00:05:58.157 real 0m5.003s 00:05:58.157 user 0m5.638s 00:05:58.157 sys 0m0.871s 00:05:58.158 17:57:50 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.158 ************************************ 00:05:58.158 END TEST rpc 00:05:58.158 ************************************ 00:05:58.158 17:57:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.158 17:57:50 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:58.158 17:57:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.158 17:57:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.158 17:57:50 -- common/autotest_common.sh@10 -- # set +x 00:05:58.158 ************************************ 00:05:58.158 START TEST skip_rpc 00:05:58.158 ************************************ 00:05:58.158 17:57:50 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:58.158 * Looking for test storage... 
00:05:58.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:58.158 17:57:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:58.158 17:57:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:58.158 17:57:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:58.158 17:57:50 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.158 17:57:50 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.158 17:57:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.158 ************************************ 00:05:58.158 START TEST skip_rpc 00:05:58.158 ************************************ 00:05:58.158 17:57:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:58.158 17:57:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=61578 00:05:58.158 17:57:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.158 17:57:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:58.158 17:57:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:58.158 [2024-05-15 17:57:50.484745] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:05:58.158 [2024-05-15 17:57:50.484947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61578 ] 00:05:58.415 [2024-05-15 17:57:50.661836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.415 [2024-05-15 17:57:50.898099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 61578 00:06:03.676 17:57:55 
skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 61578 ']' 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 61578 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61578 00:06:03.676 killing process with pid 61578 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61578' 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 61578 00:06:03.676 17:57:55 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 61578 00:06:05.575 00:06:05.575 real 0m7.209s 00:06:05.575 user 0m6.657s 00:06:05.575 sys 0m0.442s 00:06:05.575 17:57:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.575 ************************************ 00:06:05.575 END TEST skip_rpc 00:06:05.575 ************************************ 00:06:05.575 17:57:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.575 17:57:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:05.575 17:57:57 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.575 17:57:57 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.575 17:57:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.575 ************************************ 00:06:05.575 START TEST skip_rpc_with_json 00:06:05.575 ************************************ 00:06:05.575 17:57:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:05.575 17:57:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:05.575 17:57:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61682 00:06:05.575 17:57:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.575 17:57:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.575 17:57:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61682 00:06:05.575 17:57:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 61682 ']' 00:06:05.575 17:57:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.575 17:57:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.575 17:57:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.575 17:57:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.575 17:57:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:05.575 [2024-05-15 17:57:57.708885] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
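The skip_rpc case that finished above starts spdk_tgt with --no-rpc-server and then asserts that a JSON-RPC call fails while the target is alive. A minimal standalone sketch of that assertion, assuming an SPDK build tree with build/bin/spdk_tgt and scripts/rpc.py; the 5-second settle delay mirrors the harness's sleep, not a hard requirement:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                  # give the target time to finish init
    if scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo 'FAIL: RPC answered despite --no-rpc-server' >&2
    else
        echo 'PASS: RPC is disabled as expected'
    fi
    kill "$tgt_pid" && wait "$tgt_pid" 2>/dev/null

The harness's NOT helper performs the same inversion generically: it runs the command, swallows the non-zero status, and fails the test only if the command unexpectedly succeeds.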
00:06:05.575 [2024-05-15 17:57:57.709033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61682 ] 00:06:05.575 [2024-05-15 17:57:57.873539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.833 [2024-05-15 17:57:58.111842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.769 17:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.769 17:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:06.769 17:57:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:06.769 17:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.769 17:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.769 [2024-05-15 17:57:58.955193] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:06.769 request: 00:06:06.769 { 00:06:06.769 "trtype": "tcp", 00:06:06.769 "method": "nvmf_get_transports", 00:06:06.769 "req_id": 1 00:06:06.769 } 00:06:06.769 Got JSON-RPC error response 00:06:06.769 response: 00:06:06.769 { 00:06:06.769 "code": -19, 00:06:06.769 "message": "No such device" 00:06:06.769 } 00:06:06.769 17:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:06.769 17:57:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:06.769 17:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.769 17:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.769 [2024-05-15 17:57:58.967315] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.769 17:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.769 17:57:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:06.769 17:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.769 17:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.769 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.769 17:57:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:06.769 { 00:06:06.769 "subsystems": [ 00:06:06.769 { 00:06:06.769 "subsystem": "keyring", 00:06:06.769 "config": [] 00:06:06.769 }, 00:06:06.769 { 00:06:06.769 "subsystem": "iobuf", 00:06:06.769 "config": [ 00:06:06.769 { 00:06:06.769 "method": "iobuf_set_options", 00:06:06.769 "params": { 00:06:06.769 "small_pool_count": 8192, 00:06:06.769 "large_pool_count": 1024, 00:06:06.769 "small_bufsize": 8192, 00:06:06.769 "large_bufsize": 135168 00:06:06.769 } 00:06:06.769 } 00:06:06.769 ] 00:06:06.769 }, 00:06:06.769 { 00:06:06.769 "subsystem": "sock", 00:06:06.769 "config": [ 00:06:06.769 { 00:06:06.769 "method": "sock_impl_set_options", 00:06:06.769 "params": { 00:06:06.769 "impl_name": "posix", 00:06:06.769 "recv_buf_size": 2097152, 00:06:06.769 "send_buf_size": 2097152, 00:06:06.769 "enable_recv_pipe": true, 00:06:06.769 "enable_quickack": false, 00:06:06.769 "enable_placement_id": 0, 00:06:06.769 "enable_zerocopy_send_server": true, 
00:06:06.769 "enable_zerocopy_send_client": false, 00:06:06.769 "zerocopy_threshold": 0, 00:06:06.769 "tls_version": 0, 00:06:06.769 "enable_ktls": false 00:06:06.769 } 00:06:06.769 }, 00:06:06.769 { 00:06:06.769 "method": "sock_impl_set_options", 00:06:06.769 "params": { 00:06:06.769 "impl_name": "ssl", 00:06:06.769 "recv_buf_size": 4096, 00:06:06.769 "send_buf_size": 4096, 00:06:06.769 "enable_recv_pipe": true, 00:06:06.769 "enable_quickack": false, 00:06:06.769 "enable_placement_id": 0, 00:06:06.769 "enable_zerocopy_send_server": true, 00:06:06.769 "enable_zerocopy_send_client": false, 00:06:06.769 "zerocopy_threshold": 0, 00:06:06.769 "tls_version": 0, 00:06:06.769 "enable_ktls": false 00:06:06.769 } 00:06:06.769 } 00:06:06.769 ] 00:06:06.769 }, 00:06:06.769 { 00:06:06.769 "subsystem": "vmd", 00:06:06.769 "config": [] 00:06:06.769 }, 00:06:06.769 { 00:06:06.769 "subsystem": "accel", 00:06:06.769 "config": [ 00:06:06.769 { 00:06:06.769 "method": "accel_set_options", 00:06:06.769 "params": { 00:06:06.769 "small_cache_size": 128, 00:06:06.769 "large_cache_size": 16, 00:06:06.769 "task_count": 2048, 00:06:06.769 "sequence_count": 2048, 00:06:06.769 "buf_count": 2048 00:06:06.769 } 00:06:06.769 } 00:06:06.769 ] 00:06:06.769 }, 00:06:06.769 { 00:06:06.769 "subsystem": "bdev", 00:06:06.769 "config": [ 00:06:06.769 { 00:06:06.769 "method": "bdev_set_options", 00:06:06.769 "params": { 00:06:06.769 "bdev_io_pool_size": 65535, 00:06:06.769 "bdev_io_cache_size": 256, 00:06:06.769 "bdev_auto_examine": true, 00:06:06.769 "iobuf_small_cache_size": 128, 00:06:06.769 "iobuf_large_cache_size": 16 00:06:06.769 } 00:06:06.769 }, 00:06:06.769 { 00:06:06.769 "method": "bdev_raid_set_options", 00:06:06.769 "params": { 00:06:06.769 "process_window_size_kb": 1024 00:06:06.769 } 00:06:06.769 }, 00:06:06.769 { 00:06:06.769 "method": "bdev_iscsi_set_options", 00:06:06.769 "params": { 00:06:06.769 "timeout_sec": 30 00:06:06.769 } 00:06:06.769 }, 00:06:06.769 { 00:06:06.769 "method": "bdev_nvme_set_options", 00:06:06.769 "params": { 00:06:06.769 "action_on_timeout": "none", 00:06:06.769 "timeout_us": 0, 00:06:06.769 "timeout_admin_us": 0, 00:06:06.769 "keep_alive_timeout_ms": 10000, 00:06:06.769 "arbitration_burst": 0, 00:06:06.769 "low_priority_weight": 0, 00:06:06.769 "medium_priority_weight": 0, 00:06:06.769 "high_priority_weight": 0, 00:06:06.769 "nvme_adminq_poll_period_us": 10000, 00:06:06.769 "nvme_ioq_poll_period_us": 0, 00:06:06.769 "io_queue_requests": 0, 00:06:06.769 "delay_cmd_submit": true, 00:06:06.770 "transport_retry_count": 4, 00:06:06.770 "bdev_retry_count": 3, 00:06:06.770 "transport_ack_timeout": 0, 00:06:06.770 "ctrlr_loss_timeout_sec": 0, 00:06:06.770 "reconnect_delay_sec": 0, 00:06:06.770 "fast_io_fail_timeout_sec": 0, 00:06:06.770 "disable_auto_failback": false, 00:06:06.770 "generate_uuids": false, 00:06:06.770 "transport_tos": 0, 00:06:06.770 "nvme_error_stat": false, 00:06:06.770 "rdma_srq_size": 0, 00:06:06.770 "io_path_stat": false, 00:06:06.770 "allow_accel_sequence": false, 00:06:06.770 "rdma_max_cq_size": 0, 00:06:06.770 "rdma_cm_event_timeout_ms": 0, 00:06:06.770 "dhchap_digests": [ 00:06:06.770 "sha256", 00:06:06.770 "sha384", 00:06:06.770 "sha512" 00:06:06.770 ], 00:06:06.770 "dhchap_dhgroups": [ 00:06:06.770 "null", 00:06:06.770 "ffdhe2048", 00:06:06.770 "ffdhe3072", 00:06:06.770 "ffdhe4096", 00:06:06.770 "ffdhe6144", 00:06:06.770 "ffdhe8192" 00:06:06.770 ] 00:06:06.770 } 00:06:06.770 }, 00:06:06.770 { 00:06:06.770 "method": "bdev_nvme_set_hotplug", 00:06:06.770 "params": { 
00:06:06.770 "period_us": 100000, 00:06:06.770 "enable": false 00:06:06.770 } 00:06:06.770 }, 00:06:06.770 { 00:06:06.770 "method": "bdev_wait_for_examine" 00:06:06.770 } 00:06:06.770 ] 00:06:06.770 }, 00:06:06.770 { 00:06:06.770 "subsystem": "scsi", 00:06:06.770 "config": null 00:06:06.770 }, 00:06:06.770 { 00:06:06.770 "subsystem": "scheduler", 00:06:06.770 "config": [ 00:06:06.770 { 00:06:06.770 "method": "framework_set_scheduler", 00:06:06.770 "params": { 00:06:06.770 "name": "static" 00:06:06.770 } 00:06:06.770 } 00:06:06.770 ] 00:06:06.770 }, 00:06:06.770 { 00:06:06.770 "subsystem": "vhost_scsi", 00:06:06.770 "config": [] 00:06:06.770 }, 00:06:06.770 { 00:06:06.770 "subsystem": "vhost_blk", 00:06:06.770 "config": [] 00:06:06.770 }, 00:06:06.770 { 00:06:06.770 "subsystem": "ublk", 00:06:06.770 "config": [] 00:06:06.770 }, 00:06:06.770 { 00:06:06.770 "subsystem": "nbd", 00:06:06.770 "config": [] 00:06:06.770 }, 00:06:06.770 { 00:06:06.770 "subsystem": "nvmf", 00:06:06.770 "config": [ 00:06:06.770 { 00:06:06.770 "method": "nvmf_set_config", 00:06:06.770 "params": { 00:06:06.770 "discovery_filter": "match_any", 00:06:06.770 "admin_cmd_passthru": { 00:06:06.770 "identify_ctrlr": false 00:06:06.770 } 00:06:06.770 } 00:06:06.770 }, 00:06:06.770 { 00:06:06.770 "method": "nvmf_set_max_subsystems", 00:06:06.770 "params": { 00:06:06.770 "max_subsystems": 1024 00:06:06.770 } 00:06:06.770 }, 00:06:06.770 { 00:06:06.770 "method": "nvmf_set_crdt", 00:06:06.770 "params": { 00:06:06.770 "crdt1": 0, 00:06:06.770 "crdt2": 0, 00:06:06.770 "crdt3": 0 00:06:06.770 } 00:06:06.770 }, 00:06:06.770 { 00:06:06.770 "method": "nvmf_create_transport", 00:06:06.770 "params": { 00:06:06.770 "trtype": "TCP", 00:06:06.770 "max_queue_depth": 128, 00:06:06.770 "max_io_qpairs_per_ctrlr": 127, 00:06:06.770 "in_capsule_data_size": 4096, 00:06:06.770 "max_io_size": 131072, 00:06:06.770 "io_unit_size": 131072, 00:06:06.770 "max_aq_depth": 128, 00:06:06.770 "num_shared_buffers": 511, 00:06:06.770 "buf_cache_size": 4294967295, 00:06:06.770 "dif_insert_or_strip": false, 00:06:06.770 "zcopy": false, 00:06:06.770 "c2h_success": true, 00:06:06.770 "sock_priority": 0, 00:06:06.770 "abort_timeout_sec": 1, 00:06:06.770 "ack_timeout": 0, 00:06:06.770 "data_wr_pool_size": 0 00:06:06.770 } 00:06:06.770 } 00:06:06.770 ] 00:06:06.770 }, 00:06:06.770 { 00:06:06.770 "subsystem": "iscsi", 00:06:06.770 "config": [ 00:06:06.770 { 00:06:06.770 "method": "iscsi_set_options", 00:06:06.770 "params": { 00:06:06.770 "node_base": "iqn.2016-06.io.spdk", 00:06:06.770 "max_sessions": 128, 00:06:06.770 "max_connections_per_session": 2, 00:06:06.770 "max_queue_depth": 64, 00:06:06.770 "default_time2wait": 2, 00:06:06.770 "default_time2retain": 20, 00:06:06.770 "first_burst_length": 8192, 00:06:06.770 "immediate_data": true, 00:06:06.770 "allow_duplicated_isid": false, 00:06:06.770 "error_recovery_level": 0, 00:06:06.770 "nop_timeout": 60, 00:06:06.770 "nop_in_interval": 30, 00:06:06.770 "disable_chap": false, 00:06:06.770 "require_chap": false, 00:06:06.770 "mutual_chap": false, 00:06:06.770 "chap_group": 0, 00:06:06.770 "max_large_datain_per_connection": 64, 00:06:06.770 "max_r2t_per_connection": 4, 00:06:06.770 "pdu_pool_size": 36864, 00:06:06.770 "immediate_data_pool_size": 16384, 00:06:06.770 "data_out_pool_size": 2048 00:06:06.770 } 00:06:06.770 } 00:06:06.770 ] 00:06:06.770 } 00:06:06.770 ] 00:06:06.770 } 00:06:06.770 17:57:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:06.770 17:57:59 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61682 00:06:06.770 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 61682 ']' 00:06:06.770 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 61682 00:06:06.770 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:06.770 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.770 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61682 00:06:06.770 killing process with pid 61682 00:06:06.770 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:06.770 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:06.770 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61682' 00:06:06.770 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 61682 00:06:06.770 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 61682 00:06:09.300 17:58:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61733 00:06:09.300 17:58:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:09.300 17:58:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:14.567 17:58:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61733 00:06:14.567 17:58:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 61733 ']' 00:06:14.567 17:58:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 61733 00:06:14.567 17:58:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:14.567 17:58:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:14.567 17:58:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61733 00:06:14.567 killing process with pid 61733 00:06:14.567 17:58:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:14.567 17:58:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:14.567 17:58:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61733' 00:06:14.567 17:58:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 61733 00:06:14.567 17:58:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 61733 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:16.468 00:06:16.468 real 0m10.950s 00:06:16.468 user 0m10.346s 00:06:16.468 sys 0m0.987s 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.468 ************************************ 00:06:16.468 END TEST skip_rpc_with_json 00:06:16.468 ************************************ 00:06:16.468 17:58:08 skip_rpc -- 
rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:16.468 17:58:08 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:16.468 17:58:08 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.468 17:58:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.468 ************************************ 00:06:16.468 START TEST skip_rpc_with_delay 00:06:16.468 ************************************ 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:16.468 [2024-05-15 17:58:08.720168] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
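This error is the expected result: skip_rpc_with_delay deliberately combines --no-rpc-server with --wait-for-rpc, a pairing spdk_app_start rejects because there would be no RPC server to wait for. A minimal sketch of the same negative check, under the same build-tree assumptions as the sketch above:

    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'FAIL: target started despite the conflicting flags' >&2
        exit 1
    fi
    echo 'PASS: --wait-for-rpc is rejected when no RPC server will start'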
00:06:16.468 [2024-05-15 17:58:08.720359] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.468 00:06:16.468 real 0m0.177s 00:06:16.468 user 0m0.093s 00:06:16.468 sys 0m0.082s 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.468 ************************************ 00:06:16.468 END TEST skip_rpc_with_delay 00:06:16.468 ************************************ 00:06:16.468 17:58:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:16.468 17:58:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:16.468 17:58:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:16.468 17:58:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:16.468 17:58:08 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:16.468 17:58:08 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.468 17:58:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.468 ************************************ 00:06:16.468 START TEST exit_on_failed_rpc_init 00:06:16.468 ************************************ 00:06:16.468 17:58:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:16.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.468 17:58:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61861 00:06:16.468 17:58:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61861 00:06:16.468 17:58:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.468 17:58:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 61861 ']' 00:06:16.468 17:58:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.468 17:58:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:16.468 17:58:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.468 17:58:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:16.468 17:58:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:16.468 [2024-05-15 17:58:08.951214] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
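exit_on_failed_rpc_init, starting here, is a two-target scenario: the first spdk_tgt (pid 61861) claims the default RPC socket, then a second instance is launched on another core mask with the expectation that its RPC listener cannot bind and the app exits non-zero. A minimal sketch of the collision, assuming the default socket /var/tmp/spdk.sock, with a polling loop standing in for the harness's waitforlisten:

    build/bin/spdk_tgt -m 0x1 &              # first target owns /var/tmp/spdk.sock
    first_pid=$!
    until scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done
    if build/bin/spdk_tgt -m 0x2; then       # same socket path: rpc listen must fail
        echo 'FAIL: second target started on an occupied RPC socket' >&2
    fi
    kill "$first_pid"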
00:06:16.468 [2024-05-15 17:58:08.951374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61861 ] 00:06:16.726 [2024-05-15 17:58:09.112193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.985 [2024-05-15 17:58:09.349821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:17.919 17:58:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:17.919 [2024-05-15 17:58:10.249349] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:17.919 [2024-05-15 17:58:10.249516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61890 ] 00:06:17.919 [2024-05-15 17:58:10.416327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.485 [2024-05-15 17:58:10.707228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.485 [2024-05-15 17:58:10.707359] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
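The "in use. Specify another." hint refers to the RPC listen path, which is selectable per instance with spdk_tgt's -r flag; the json_config_extra_key test later in this log uses exactly that to stay off the default socket. A minimal sketch, assuming /var/tmp/spdk2.sock is free; the client must then name the same path with rpc.py -s:

    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
    sleep 5
    scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version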
00:06:18.485 [2024-05-15 17:58:10.707383] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:18.485 [2024-05-15 17:58:10.707413] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61861 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 61861 ']' 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 61861 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61861 00:06:18.744 killing process with pid 61861 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61861' 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 61861 00:06:18.744 17:58:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 61861 00:06:21.277 00:06:21.277 real 0m4.473s 00:06:21.277 user 0m5.060s 00:06:21.277 sys 0m0.631s 00:06:21.277 17:58:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.277 17:58:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:21.277 ************************************ 00:06:21.277 END TEST exit_on_failed_rpc_init 00:06:21.277 ************************************ 00:06:21.277 17:58:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:21.277 00:06:21.277 real 0m23.078s 00:06:21.277 user 0m22.248s 00:06:21.277 sys 0m2.309s 00:06:21.277 17:58:13 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.277 17:58:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.277 ************************************ 00:06:21.277 END TEST skip_rpc 00:06:21.277 ************************************ 00:06:21.277 17:58:13 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:21.277 17:58:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.277 17:58:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.277 17:58:13 -- common/autotest_common.sh@10 -- # set +x 00:06:21.277 
************************************ 00:06:21.277 START TEST rpc_client 00:06:21.277 ************************************ 00:06:21.277 17:58:13 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:21.277 * Looking for test storage... 00:06:21.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:21.277 17:58:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:21.277 OK 00:06:21.277 17:58:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:21.277 00:06:21.277 real 0m0.133s 00:06:21.277 user 0m0.065s 00:06:21.277 sys 0m0.074s 00:06:21.277 17:58:13 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.277 17:58:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:21.277 ************************************ 00:06:21.277 END TEST rpc_client 00:06:21.277 ************************************ 00:06:21.277 17:58:13 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:21.277 17:58:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.277 17:58:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.277 17:58:13 -- common/autotest_common.sh@10 -- # set +x 00:06:21.277 ************************************ 00:06:21.277 START TEST json_config 00:06:21.277 ************************************ 00:06:21.277 17:58:13 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:21.277 17:58:13 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d392595-b32d-4fb6-a9ae-a7286ece9269 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2d392595-b32d-4fb6-a9ae-a7286ece9269 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:21.277 17:58:13 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.277 17:58:13 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.277 17:58:13 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.277 17:58:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.277 17:58:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.277 17:58:13 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.277 17:58:13 json_config -- paths/export.sh@5 -- # export PATH 00:06:21.277 17:58:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@47 -- # : 0 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.277 17:58:13 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.278 17:58:13 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.278 17:58:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.278 17:58:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.278 17:58:13 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.278 17:58:13 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.278 17:58:13 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.278 17:58:13 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:21.278 17:58:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:21.278 WARNING: No tests are enabled so not running JSON configuration tests 00:06:21.278 17:58:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:21.278 17:58:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:21.278 17:58:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:21.278 17:58:13 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:21.278 17:58:13 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:21.278 00:06:21.278 real 0m0.083s 00:06:21.278 user 0m0.032s 00:06:21.278 sys 0m0.048s 00:06:21.278 ************************************ 00:06:21.278 END TEST json_config 00:06:21.278 ************************************ 00:06:21.278 17:58:13 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.278 17:58:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.278 17:58:13 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:21.278 17:58:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.278 17:58:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.278 17:58:13 -- common/autotest_common.sh@10 -- # set +x 00:06:21.278 ************************************ 00:06:21.278 START TEST json_config_extra_key 00:06:21.278 ************************************ 00:06:21.278 17:58:13 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:21.278 17:58:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d392595-b32d-4fb6-a9ae-a7286ece9269 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2d392595-b32d-4fb6-a9ae-a7286ece9269 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:21.278 17:58:13 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.278 17:58:13 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.278 17:58:13 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.278 
17:58:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.278 17:58:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.278 17:58:13 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.278 17:58:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:21.278 17:58:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.278 17:58:13 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.278 17:58:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:21.278 17:58:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:21.278 17:58:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:21.278 17:58:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:21.278 17:58:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:21.278 17:58:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:21.278 17:58:13 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:21.537 17:58:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:21.537 17:58:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:21.537 17:58:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:21.537 17:58:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:21.537 INFO: launching applications... 00:06:21.537 17:58:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:21.537 17:58:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:21.537 17:58:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:21.537 17:58:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:21.537 17:58:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:21.537 17:58:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:21.537 17:58:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.537 17:58:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.537 17:58:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62065 00:06:21.537 17:58:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:21.537 Waiting for target to run... 00:06:21.537 17:58:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62065 /var/tmp/spdk_tgt.sock 00:06:21.537 17:58:13 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:21.537 17:58:13 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 62065 ']' 00:06:21.537 17:58:13 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:21.537 17:58:13 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.537 17:58:13 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:21.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:21.537 17:58:13 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.537 17:58:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:21.537 [2024-05-15 17:58:13.913499] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
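The launch sequence traced above reduces to: start spdk_tgt against a UNIX-domain RPC socket with the extra_key JSON config, record its PID, and poll until the socket answers. A minimal standalone sketch of that pattern (the repo path is the one seen in this log; using rpc_get_methods as the liveness probe is our assumption, not the exact waitforlisten internals):

#!/usr/bin/env bash
SPDK_DIR=/home/vagrant/spdk_repo/spdk        # path as seen in this log
SOCK=/var/tmp/spdk_tgt.sock
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
    --json "$SPDK_DIR/test/json_config/extra_key.json" &
pid=$!
# Poll the socket until the target responds, roughly what waitforlisten does.
for ((i = 0; i < 100; i++)); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null; then
        echo "target $pid is listening on $SOCK"
        break
    fi
    sleep 0.1
done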
00:06:21.537 [2024-05-15 17:58:13.913842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62065 ] 00:06:22.105 [2024-05-15 17:58:14.348427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.105 [2024-05-15 17:58:14.560809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.684 00:06:22.684 INFO: shutting down applications... 00:06:22.684 17:58:15 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.684 17:58:15 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:22.684 17:58:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:22.684 17:58:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:22.684 17:58:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:22.684 17:58:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:22.684 17:58:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:22.684 17:58:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62065 ]] 00:06:22.684 17:58:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62065 00:06:22.684 17:58:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:22.684 17:58:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.684 17:58:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62065 00:06:22.684 17:58:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:23.251 17:58:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:23.251 17:58:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.251 17:58:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62065 00:06:23.251 17:58:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:23.838 17:58:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:23.838 17:58:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.838 17:58:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62065 00:06:23.838 17:58:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:24.402 17:58:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:24.402 17:58:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.402 17:58:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62065 00:06:24.402 17:58:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:24.803 17:58:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:24.803 17:58:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.803 17:58:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62065 00:06:24.803 17:58:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:25.369 17:58:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:25.369 17:58:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.369 17:58:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62065 00:06:25.369 17:58:17 
json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:25.935 17:58:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:25.935 17:58:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.935 17:58:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62065 00:06:25.935 SPDK target shutdown done 00:06:25.935 Success 00:06:25.935 17:58:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:25.935 17:58:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:25.935 17:58:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:25.935 17:58:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:25.935 17:58:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:25.935 00:06:25.935 real 0m4.511s 00:06:25.935 user 0m3.851s 00:06:25.935 sys 0m0.589s 00:06:25.935 ************************************ 00:06:25.935 END TEST json_config_extra_key 00:06:25.935 ************************************ 00:06:25.935 17:58:18 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.935 17:58:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:25.935 17:58:18 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:25.935 17:58:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.935 17:58:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.935 17:58:18 -- common/autotest_common.sh@10 -- # set +x 00:06:25.935 ************************************ 00:06:25.935 START TEST alias_rpc 00:06:25.935 ************************************ 00:06:25.935 17:58:18 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:25.935 * Looking for test storage... 00:06:25.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:25.935 17:58:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:25.935 17:58:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62169 00:06:25.935 17:58:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.935 17:58:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62169 00:06:25.935 17:58:18 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 62169 ']' 00:06:25.935 17:58:18 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.935 17:58:18 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.935 17:58:18 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.935 17:58:18 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.935 17:58:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.196 [2024-05-15 17:58:18.460194] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
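The shutdown half of the json_config_extra_key run above is a SIGINT followed by a kill -0 poll, half a second per try, thirty tries. As a self-contained sketch (the PID argument is a placeholder for whatever spdk_tgt instance is being torn down):

#!/usr/bin/env bash
pid=$1                              # PID of a running spdk_tgt (placeholder)
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do      # up to ~15 s, matching the loop traced above
    if ! kill -0 "$pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        exit 0
    fi
    sleep 0.5
done
echo "process $pid still alive after 15 s" >&2
exit 1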
00:06:26.196 [2024-05-15 17:58:18.460504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62169 ] 00:06:26.196 [2024-05-15 17:58:18.627280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.454 [2024-05-15 17:58:18.889893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.389 17:58:19 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.389 17:58:19 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:27.389 17:58:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:27.647 17:58:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62169 00:06:27.647 17:58:19 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 62169 ']' 00:06:27.647 17:58:19 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 62169 00:06:27.647 17:58:19 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:27.647 17:58:19 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:27.647 17:58:19 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62169 00:06:27.647 killing process with pid 62169 00:06:27.647 17:58:19 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:27.648 17:58:19 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:27.648 17:58:19 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62169' 00:06:27.648 17:58:19 alias_rpc -- common/autotest_common.sh@965 -- # kill 62169 00:06:27.648 17:58:19 alias_rpc -- common/autotest_common.sh@970 -- # wait 62169 00:06:30.213 ************************************ 00:06:30.213 END TEST alias_rpc 00:06:30.213 ************************************ 00:06:30.213 00:06:30.213 real 0m3.917s 00:06:30.213 user 0m4.014s 00:06:30.213 sys 0m0.566s 00:06:30.213 17:58:22 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.213 17:58:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.213 17:58:22 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:30.213 17:58:22 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:30.213 17:58:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.213 17:58:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.213 17:58:22 -- common/autotest_common.sh@10 -- # set +x 00:06:30.213 ************************************ 00:06:30.213 START TEST spdkcli_tcp 00:06:30.213 ************************************ 00:06:30.213 17:58:22 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:30.213 * Looking for test storage... 
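The alias_rpc teardown above goes through killprocess, whose checks are visible in the trace: verify the PID is set and alive, read the process name with ps, refuse to signal a sudo wrapper, then kill and wait. A sketch of that logic as inferred from the trace (the function name here is ours, not SPDK's exact source):

killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0     # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")    # reactor_0 in the run above
    [[ $name == sudo ]] && return 1            # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                    # valid when pid is our child
}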
00:06:30.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:30.213 17:58:22 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:30.213 17:58:22 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:30.213 17:58:22 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:30.213 17:58:22 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:30.213 17:58:22 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:30.213 17:58:22 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:30.213 17:58:22 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:30.213 17:58:22 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:30.213 17:58:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.213 17:58:22 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62262 00:06:30.213 17:58:22 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 62262 00:06:30.213 17:58:22 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:30.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.213 17:58:22 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 62262 ']' 00:06:30.213 17:58:22 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.213 17:58:22 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.213 17:58:22 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.213 17:58:22 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.213 17:58:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.213 [2024-05-15 17:58:22.433282] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
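Unlike the single-core runs earlier, the spdkcli_tcp target above starts with -m 0x3 -p 0: a two-core reactor mask with the main core pinned to 0, which is why the trace that follows reports two reactors. A sketch of that launch plus a reactor check (framework_get_reactors appears in the RPC method list dumped below; using it as a verification step is our addition):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &   # cores 0 and 1
pid=$!
sleep 1                                    # crude stand-in for waitforlisten
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_reactors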
00:06:30.213 [2024-05-15 17:58:22.433493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62262 ] 00:06:30.213 [2024-05-15 17:58:22.604495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.471 [2024-05-15 17:58:22.843370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.471 [2024-05-15 17:58:22.843389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.404 17:58:23 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.404 17:58:23 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:31.404 17:58:23 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:31.404 17:58:23 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=62285 00:06:31.404 17:58:23 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:31.404 [ 00:06:31.404 "bdev_malloc_delete", 00:06:31.404 "bdev_malloc_create", 00:06:31.404 "bdev_null_resize", 00:06:31.404 "bdev_null_delete", 00:06:31.404 "bdev_null_create", 00:06:31.404 "bdev_nvme_cuse_unregister", 00:06:31.404 "bdev_nvme_cuse_register", 00:06:31.404 "bdev_opal_new_user", 00:06:31.404 "bdev_opal_set_lock_state", 00:06:31.404 "bdev_opal_delete", 00:06:31.404 "bdev_opal_get_info", 00:06:31.404 "bdev_opal_create", 00:06:31.404 "bdev_nvme_opal_revert", 00:06:31.404 "bdev_nvme_opal_init", 00:06:31.404 "bdev_nvme_send_cmd", 00:06:31.404 "bdev_nvme_get_path_iostat", 00:06:31.404 "bdev_nvme_get_mdns_discovery_info", 00:06:31.404 "bdev_nvme_stop_mdns_discovery", 00:06:31.404 "bdev_nvme_start_mdns_discovery", 00:06:31.404 "bdev_nvme_set_multipath_policy", 00:06:31.404 "bdev_nvme_set_preferred_path", 00:06:31.404 "bdev_nvme_get_io_paths", 00:06:31.404 "bdev_nvme_remove_error_injection", 00:06:31.404 "bdev_nvme_add_error_injection", 00:06:31.404 "bdev_nvme_get_discovery_info", 00:06:31.404 "bdev_nvme_stop_discovery", 00:06:31.404 "bdev_nvme_start_discovery", 00:06:31.404 "bdev_nvme_get_controller_health_info", 00:06:31.404 "bdev_nvme_disable_controller", 00:06:31.404 "bdev_nvme_enable_controller", 00:06:31.404 "bdev_nvme_reset_controller", 00:06:31.404 "bdev_nvme_get_transport_statistics", 00:06:31.404 "bdev_nvme_apply_firmware", 00:06:31.404 "bdev_nvme_detach_controller", 00:06:31.404 "bdev_nvme_get_controllers", 00:06:31.404 "bdev_nvme_attach_controller", 00:06:31.404 "bdev_nvme_set_hotplug", 00:06:31.404 "bdev_nvme_set_options", 00:06:31.404 "bdev_passthru_delete", 00:06:31.404 "bdev_passthru_create", 00:06:31.404 "bdev_lvol_check_shallow_copy", 00:06:31.404 "bdev_lvol_start_shallow_copy", 00:06:31.404 "bdev_lvol_grow_lvstore", 00:06:31.404 "bdev_lvol_get_lvols", 00:06:31.404 "bdev_lvol_get_lvstores", 00:06:31.404 "bdev_lvol_delete", 00:06:31.404 "bdev_lvol_set_read_only", 00:06:31.404 "bdev_lvol_resize", 00:06:31.404 "bdev_lvol_decouple_parent", 00:06:31.404 "bdev_lvol_inflate", 00:06:31.404 "bdev_lvol_rename", 00:06:31.404 "bdev_lvol_clone_bdev", 00:06:31.404 "bdev_lvol_clone", 00:06:31.404 "bdev_lvol_snapshot", 00:06:31.404 "bdev_lvol_create", 00:06:31.404 "bdev_lvol_delete_lvstore", 00:06:31.404 "bdev_lvol_rename_lvstore", 00:06:31.404 "bdev_lvol_create_lvstore", 00:06:31.404 "bdev_raid_set_options", 00:06:31.404 "bdev_raid_remove_base_bdev", 00:06:31.404 
"bdev_raid_add_base_bdev", 00:06:31.404 "bdev_raid_delete", 00:06:31.404 "bdev_raid_create", 00:06:31.404 "bdev_raid_get_bdevs", 00:06:31.404 "bdev_error_inject_error", 00:06:31.404 "bdev_error_delete", 00:06:31.404 "bdev_error_create", 00:06:31.404 "bdev_split_delete", 00:06:31.404 "bdev_split_create", 00:06:31.404 "bdev_delay_delete", 00:06:31.404 "bdev_delay_create", 00:06:31.404 "bdev_delay_update_latency", 00:06:31.404 "bdev_zone_block_delete", 00:06:31.404 "bdev_zone_block_create", 00:06:31.404 "blobfs_create", 00:06:31.404 "blobfs_detect", 00:06:31.404 "blobfs_set_cache_size", 00:06:31.404 "bdev_xnvme_delete", 00:06:31.404 "bdev_xnvme_create", 00:06:31.404 "bdev_aio_delete", 00:06:31.404 "bdev_aio_rescan", 00:06:31.404 "bdev_aio_create", 00:06:31.404 "bdev_ftl_set_property", 00:06:31.404 "bdev_ftl_get_properties", 00:06:31.404 "bdev_ftl_get_stats", 00:06:31.404 "bdev_ftl_unmap", 00:06:31.404 "bdev_ftl_unload", 00:06:31.404 "bdev_ftl_delete", 00:06:31.404 "bdev_ftl_load", 00:06:31.404 "bdev_ftl_create", 00:06:31.404 "bdev_virtio_attach_controller", 00:06:31.404 "bdev_virtio_scsi_get_devices", 00:06:31.404 "bdev_virtio_detach_controller", 00:06:31.404 "bdev_virtio_blk_set_hotplug", 00:06:31.404 "bdev_iscsi_delete", 00:06:31.404 "bdev_iscsi_create", 00:06:31.404 "bdev_iscsi_set_options", 00:06:31.404 "accel_error_inject_error", 00:06:31.404 "ioat_scan_accel_module", 00:06:31.404 "dsa_scan_accel_module", 00:06:31.404 "iaa_scan_accel_module", 00:06:31.404 "keyring_file_remove_key", 00:06:31.404 "keyring_file_add_key", 00:06:31.404 "iscsi_get_histogram", 00:06:31.404 "iscsi_enable_histogram", 00:06:31.404 "iscsi_set_options", 00:06:31.404 "iscsi_get_auth_groups", 00:06:31.404 "iscsi_auth_group_remove_secret", 00:06:31.404 "iscsi_auth_group_add_secret", 00:06:31.404 "iscsi_delete_auth_group", 00:06:31.404 "iscsi_create_auth_group", 00:06:31.404 "iscsi_set_discovery_auth", 00:06:31.404 "iscsi_get_options", 00:06:31.404 "iscsi_target_node_request_logout", 00:06:31.404 "iscsi_target_node_set_redirect", 00:06:31.404 "iscsi_target_node_set_auth", 00:06:31.404 "iscsi_target_node_add_lun", 00:06:31.405 "iscsi_get_stats", 00:06:31.405 "iscsi_get_connections", 00:06:31.405 "iscsi_portal_group_set_auth", 00:06:31.405 "iscsi_start_portal_group", 00:06:31.405 "iscsi_delete_portal_group", 00:06:31.405 "iscsi_create_portal_group", 00:06:31.405 "iscsi_get_portal_groups", 00:06:31.405 "iscsi_delete_target_node", 00:06:31.405 "iscsi_target_node_remove_pg_ig_maps", 00:06:31.405 "iscsi_target_node_add_pg_ig_maps", 00:06:31.405 "iscsi_create_target_node", 00:06:31.405 "iscsi_get_target_nodes", 00:06:31.405 "iscsi_delete_initiator_group", 00:06:31.405 "iscsi_initiator_group_remove_initiators", 00:06:31.405 "iscsi_initiator_group_add_initiators", 00:06:31.405 "iscsi_create_initiator_group", 00:06:31.405 "iscsi_get_initiator_groups", 00:06:31.405 "nvmf_set_crdt", 00:06:31.405 "nvmf_set_config", 00:06:31.405 "nvmf_set_max_subsystems", 00:06:31.405 "nvmf_stop_mdns_prr", 00:06:31.405 "nvmf_publish_mdns_prr", 00:06:31.405 "nvmf_subsystem_get_listeners", 00:06:31.405 "nvmf_subsystem_get_qpairs", 00:06:31.405 "nvmf_subsystem_get_controllers", 00:06:31.405 "nvmf_get_stats", 00:06:31.405 "nvmf_get_transports", 00:06:31.405 "nvmf_create_transport", 00:06:31.405 "nvmf_get_targets", 00:06:31.405 "nvmf_delete_target", 00:06:31.405 "nvmf_create_target", 00:06:31.405 "nvmf_subsystem_allow_any_host", 00:06:31.405 "nvmf_subsystem_remove_host", 00:06:31.405 "nvmf_subsystem_add_host", 00:06:31.405 "nvmf_ns_remove_host", 
00:06:31.405 "nvmf_ns_add_host", 00:06:31.405 "nvmf_subsystem_remove_ns", 00:06:31.405 "nvmf_subsystem_add_ns", 00:06:31.405 "nvmf_subsystem_listener_set_ana_state", 00:06:31.405 "nvmf_discovery_get_referrals", 00:06:31.405 "nvmf_discovery_remove_referral", 00:06:31.405 "nvmf_discovery_add_referral", 00:06:31.405 "nvmf_subsystem_remove_listener", 00:06:31.405 "nvmf_subsystem_add_listener", 00:06:31.405 "nvmf_delete_subsystem", 00:06:31.405 "nvmf_create_subsystem", 00:06:31.405 "nvmf_get_subsystems", 00:06:31.405 "env_dpdk_get_mem_stats", 00:06:31.405 "nbd_get_disks", 00:06:31.405 "nbd_stop_disk", 00:06:31.405 "nbd_start_disk", 00:06:31.405 "ublk_recover_disk", 00:06:31.405 "ublk_get_disks", 00:06:31.405 "ublk_stop_disk", 00:06:31.405 "ublk_start_disk", 00:06:31.405 "ublk_destroy_target", 00:06:31.405 "ublk_create_target", 00:06:31.405 "virtio_blk_create_transport", 00:06:31.405 "virtio_blk_get_transports", 00:06:31.405 "vhost_controller_set_coalescing", 00:06:31.405 "vhost_get_controllers", 00:06:31.405 "vhost_delete_controller", 00:06:31.405 "vhost_create_blk_controller", 00:06:31.405 "vhost_scsi_controller_remove_target", 00:06:31.405 "vhost_scsi_controller_add_target", 00:06:31.405 "vhost_start_scsi_controller", 00:06:31.405 "vhost_create_scsi_controller", 00:06:31.405 "thread_set_cpumask", 00:06:31.405 "framework_get_scheduler", 00:06:31.405 "framework_set_scheduler", 00:06:31.405 "framework_get_reactors", 00:06:31.405 "thread_get_io_channels", 00:06:31.405 "thread_get_pollers", 00:06:31.405 "thread_get_stats", 00:06:31.405 "framework_monitor_context_switch", 00:06:31.405 "spdk_kill_instance", 00:06:31.405 "log_enable_timestamps", 00:06:31.405 "log_get_flags", 00:06:31.405 "log_clear_flag", 00:06:31.405 "log_set_flag", 00:06:31.405 "log_get_level", 00:06:31.405 "log_set_level", 00:06:31.405 "log_get_print_level", 00:06:31.405 "log_set_print_level", 00:06:31.405 "framework_enable_cpumask_locks", 00:06:31.405 "framework_disable_cpumask_locks", 00:06:31.405 "framework_wait_init", 00:06:31.405 "framework_start_init", 00:06:31.405 "scsi_get_devices", 00:06:31.405 "bdev_get_histogram", 00:06:31.405 "bdev_enable_histogram", 00:06:31.405 "bdev_set_qos_limit", 00:06:31.405 "bdev_set_qd_sampling_period", 00:06:31.405 "bdev_get_bdevs", 00:06:31.405 "bdev_reset_iostat", 00:06:31.405 "bdev_get_iostat", 00:06:31.405 "bdev_examine", 00:06:31.405 "bdev_wait_for_examine", 00:06:31.405 "bdev_set_options", 00:06:31.405 "notify_get_notifications", 00:06:31.405 "notify_get_types", 00:06:31.405 "accel_get_stats", 00:06:31.405 "accel_set_options", 00:06:31.405 "accel_set_driver", 00:06:31.405 "accel_crypto_key_destroy", 00:06:31.405 "accel_crypto_keys_get", 00:06:31.405 "accel_crypto_key_create", 00:06:31.405 "accel_assign_opc", 00:06:31.405 "accel_get_module_info", 00:06:31.405 "accel_get_opc_assignments", 00:06:31.405 "vmd_rescan", 00:06:31.405 "vmd_remove_device", 00:06:31.405 "vmd_enable", 00:06:31.405 "sock_get_default_impl", 00:06:31.405 "sock_set_default_impl", 00:06:31.405 "sock_impl_set_options", 00:06:31.405 "sock_impl_get_options", 00:06:31.405 "iobuf_get_stats", 00:06:31.405 "iobuf_set_options", 00:06:31.405 "framework_get_pci_devices", 00:06:31.405 "framework_get_config", 00:06:31.405 "framework_get_subsystems", 00:06:31.405 "trace_get_info", 00:06:31.405 "trace_get_tpoint_group_mask", 00:06:31.405 "trace_disable_tpoint_group", 00:06:31.405 "trace_enable_tpoint_group", 00:06:31.405 "trace_clear_tpoint_mask", 00:06:31.405 "trace_set_tpoint_mask", 00:06:31.405 "keyring_get_keys", 00:06:31.405 
"spdk_get_version", 00:06:31.405 "rpc_get_methods" 00:06:31.405 ] 00:06:31.405 17:58:23 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:31.405 17:58:23 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.405 17:58:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:31.663 17:58:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:31.663 17:58:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 62262 00:06:31.663 17:58:23 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 62262 ']' 00:06:31.663 17:58:23 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 62262 00:06:31.663 17:58:23 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:31.663 17:58:23 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:31.663 17:58:23 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62262 00:06:31.663 killing process with pid 62262 00:06:31.663 17:58:23 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:31.663 17:58:23 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:31.663 17:58:23 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62262' 00:06:31.663 17:58:23 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 62262 00:06:31.663 17:58:23 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 62262 00:06:34.194 ************************************ 00:06:34.194 END TEST spdkcli_tcp 00:06:34.194 ************************************ 00:06:34.194 00:06:34.194 real 0m3.940s 00:06:34.194 user 0m6.876s 00:06:34.194 sys 0m0.623s 00:06:34.194 17:58:26 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.194 17:58:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.194 17:58:26 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.194 17:58:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.195 17:58:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.195 17:58:26 -- common/autotest_common.sh@10 -- # set +x 00:06:34.195 ************************************ 00:06:34.195 START TEST dpdk_mem_utility 00:06:34.195 ************************************ 00:06:34.195 17:58:26 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.195 * Looking for test storage... 00:06:34.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:34.195 17:58:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:34.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.195 17:58:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62376 00:06:34.195 17:58:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.195 17:58:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62376 00:06:34.195 17:58:26 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 62376 ']' 00:06:34.195 17:58:26 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.195 17:58:26 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.195 17:58:26 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.195 17:58:26 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.195 17:58:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:34.195 [2024-05-15 17:58:26.397765] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:34.195 [2024-05-15 17:58:26.397937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62376 ] 00:06:34.195 [2024-05-15 17:58:26.562603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.453 [2024-05-15 17:58:26.797116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.388 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.388 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:35.388 17:58:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:35.388 17:58:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:35.388 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.388 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:35.388 { 00:06:35.388 "filename": "/tmp/spdk_mem_dump.txt" 00:06:35.388 } 00:06:35.388 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.388 17:58:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:35.388 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:35.388 1 heaps totaling size 820.000000 MiB 00:06:35.388 size: 820.000000 MiB heap id: 0 00:06:35.388 end heaps---------- 00:06:35.388 8 mempools totaling size 598.116089 MiB 00:06:35.388 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:35.388 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:35.388 size: 84.521057 MiB name: bdev_io_62376 00:06:35.388 size: 51.011292 MiB name: evtpool_62376 00:06:35.388 size: 50.003479 MiB name: msgpool_62376 00:06:35.388 size: 21.763794 MiB name: PDU_Pool 00:06:35.388 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:35.388 size: 0.026123 MiB name: Session_Pool 00:06:35.388 end mempools------- 00:06:35.388 6 memzones totaling size 4.142822 MiB 00:06:35.388 size: 1.000366 MiB name: RG_ring_0_62376 00:06:35.388 size: 1.000366 MiB name: RG_ring_1_62376 00:06:35.388 size: 1.000366 MiB name: RG_ring_4_62376 00:06:35.388 size: 1.000366 MiB name: RG_ring_5_62376 00:06:35.388 size: 
0.125366 MiB name: RG_ring_2_62376 00:06:35.388 size: 0.015991 MiB name: RG_ring_3_62376 00:06:35.388 end memzones------- 00:06:35.388 17:58:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:35.388 heap id: 0 total size: 820.000000 MiB number of busy elements: 300 number of free elements: 18 00:06:35.388 list of free elements. size: 18.451538 MiB 00:06:35.388 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:35.388 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:35.388 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:35.388 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:35.388 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:35.388 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:35.388 element at address: 0x200019600000 with size: 0.999084 MiB 00:06:35.388 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:35.388 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:35.388 element at address: 0x200018e00000 with size: 0.959656 MiB 00:06:35.388 element at address: 0x200019900040 with size: 0.936401 MiB 00:06:35.388 element at address: 0x200000200000 with size: 0.829956 MiB 00:06:35.388 element at address: 0x20001b000000 with size: 0.564148 MiB 00:06:35.388 element at address: 0x200019200000 with size: 0.487976 MiB 00:06:35.388 element at address: 0x200019a00000 with size: 0.485413 MiB 00:06:35.388 element at address: 0x200013800000 with size: 0.467896 MiB 00:06:35.389 element at address: 0x200028400000 with size: 0.390442 MiB 00:06:35.389 element at address: 0x200003a00000 with size: 0.351990 MiB 00:06:35.389 list of standard malloc elements. 
size: 199.284058 MiB 00:06:35.389 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:35.389 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:35.389 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:35.389 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:35.389 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:35.389 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:35.389 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:35.389 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:35.389 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:06:35.389 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:06:35.389 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:06:35.389 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:06:35.389 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:06:35.389 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200013877c80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200013877d80 with size: 0.000244 MiB 00:06:35.389 element at address: 0x200013877e80 with size: 0.000244 MiB 00:06:35.390 element at address: 0x200013877f80 with size: 0.000244 MiB 00:06:35.390 element at address: 0x200013878080 with size: 0.000244 MiB 00:06:35.390 element at address: 0x200013878180 with size: 0.000244 MiB 00:06:35.390 element at address: 0x200013878280 with size: 0.000244 MiB 00:06:35.390 element at address: 0x200013878380 with size: 0.000244 MiB 00:06:35.390 element at address: 0x200013878480 with size: 0.000244 MiB 00:06:35.390 element at address: 0x200013878580 with size: 0.000244 MiB 00:06:35.390 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:35.390 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:06:35.390 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x200019abc680 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b090ec0 
with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b093fc0 with size: 0.000244 MiB 
00:06:35.390 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:06:35.390 element at address: 0x200028463f40 with size: 0.000244 MiB 00:06:35.390 element at address: 0x200028464040 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846af80 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846b080 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846b180 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846b280 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846b380 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846b480 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846b580 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846b680 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846b780 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846b880 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846b980 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:06:35.390 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846be80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846c080 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846c180 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846c280 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846c380 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846c480 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846c580 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846c680 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846c780 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846c880 with size: 0.000244 MiB 00:06:35.391 element at 
address: 0x20002846c980 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846d080 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846d180 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846d280 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846d380 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846d480 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846d580 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846d680 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846d780 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846d880 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846d980 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846da80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846db80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846de80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846df80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846e080 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846e180 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846e280 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846e380 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846e480 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846e580 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846e680 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846e780 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846e880 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846e980 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846f080 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846f180 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846f280 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846f380 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846f480 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846f580 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846f680 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846f780 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846f880 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846f980 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846fa80 
with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:06:35.391 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:06:35.391 list of memzone associated elements. size: 602.264404 MiB 00:06:35.391 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:35.391 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:35.391 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:35.391 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:35.391 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:35.391 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62376_0 00:06:35.391 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:35.391 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62376_0 00:06:35.391 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:35.391 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62376_0 00:06:35.391 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:35.391 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:35.391 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:35.391 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:35.391 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:35.391 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62376 00:06:35.391 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:35.391 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62376 00:06:35.391 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:35.391 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62376 00:06:35.391 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:35.391 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:35.391 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:35.391 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:35.391 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:35.391 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:35.391 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:35.391 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:35.391 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:35.391 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62376 00:06:35.391 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:35.391 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62376 00:06:35.391 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:35.391 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62376 00:06:35.391 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:35.391 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62376 00:06:35.391 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:35.391 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62376 00:06:35.391 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:06:35.391 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:35.391 element at address: 0x200013878680 with size: 0.500549 MiB 
00:06:35.391 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:35.391 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:06:35.391 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:35.391 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:35.391 associated memzone info: size: 0.125366 MiB name: RG_ring_2_62376 00:06:35.391 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:06:35.391 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:35.391 element at address: 0x200028464140 with size: 0.023804 MiB 00:06:35.391 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:35.391 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:35.391 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62376 00:06:35.391 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:06:35.391 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:35.392 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:06:35.392 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62376 00:06:35.392 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:35.392 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62376 00:06:35.392 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:06:35.392 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:35.392 17:58:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:35.392 17:58:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62376 00:06:35.392 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 62376 ']' 00:06:35.392 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 62376 00:06:35.392 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:35.392 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:35.392 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62376 00:06:35.392 killing process with pid 62376 00:06:35.392 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:35.392 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:35.392 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62376' 00:06:35.392 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 62376 00:06:35.392 17:58:27 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 62376 00:06:37.921 ************************************ 00:06:37.921 END TEST dpdk_mem_utility 00:06:37.921 ************************************ 00:06:37.921 00:06:37.921 real 0m3.698s 00:06:37.921 user 0m3.684s 00:06:37.921 sys 0m0.586s 00:06:37.921 17:58:29 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.921 17:58:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:37.921 17:58:29 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:37.921 17:58:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.921 17:58:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.921 17:58:29 -- common/autotest_common.sh@10 -- # set +x 00:06:37.921 ************************************ 00:06:37.921 START TEST event 00:06:37.921 
************************************ 00:06:37.921 17:58:29 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:37.921 * Looking for test storage... 00:06:37.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:37.921 17:58:30 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:37.921 17:58:30 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:37.921 17:58:30 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:37.921 17:58:30 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:37.921 17:58:30 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.921 17:58:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.921 ************************************ 00:06:37.921 START TEST event_perf 00:06:37.921 ************************************ 00:06:37.921 17:58:30 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:37.921 Running I/O for 1 seconds...[2024-05-15 17:58:30.100274] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:37.921 [2024-05-15 17:58:30.100634] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62476 ] 00:06:37.921 [2024-05-15 17:58:30.264648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.179 [2024-05-15 17:58:30.502403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.179 [2024-05-15 17:58:30.502513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.179 [2024-05-15 17:58:30.503138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.179 [2024-05-15 17:58:30.503159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.581 Running I/O for 1 seconds... 00:06:39.581 lcore 0: 191361 00:06:39.581 lcore 1: 191360 00:06:39.581 lcore 2: 191359 00:06:39.581 lcore 3: 191360 00:06:39.581 done. 00:06:39.581 00:06:39.581 ************************************ 00:06:39.581 END TEST event_perf 00:06:39.581 ************************************ 00:06:39.581 real 0m1.814s 00:06:39.581 user 0m4.560s 00:06:39.581 sys 0m0.129s 00:06:39.581 17:58:31 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.581 17:58:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.581 17:58:31 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:39.581 17:58:31 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:39.581 17:58:31 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.581 17:58:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.581 ************************************ 00:06:39.581 START TEST event_reactor 00:06:39.581 ************************************ 00:06:39.581 17:58:31 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:39.581 [2024-05-15 17:58:31.969738] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
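The killprocess teardown above (pid 62376) follows a defensive pattern worth noting: verify the pid with kill -0, check the command name via ps so an unrelated or sudo-owned process is never signalled, then kill and wait. A minimal sketch of that pattern, assuming a plain bash helper (names are illustrative, not the autotest_common.sh source):

  kill_guarded() {                            # illustrative stand-in for killprocess()
    local pid=$1
    [ -n "$pid" ] || return 1                 # no pid given
    kill -0 "$pid" 2>/dev/null || return 0    # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the run above
    [ "$name" = sudo ] && return 1            # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null    # wait only succeeds for our own children
  }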
00:06:39.581 [2024-05-15 17:58:31.969884] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62516 ] 00:06:39.841 [2024-05-15 17:58:32.133136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.103 [2024-05-15 17:58:32.368764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.479 test_start 00:06:41.479 oneshot 00:06:41.479 tick 100 00:06:41.479 tick 100 00:06:41.479 tick 250 00:06:41.479 tick 100 00:06:41.479 tick 100 00:06:41.479 tick 100 00:06:41.479 tick 250 00:06:41.479 tick 500 00:06:41.479 tick 100 00:06:41.479 tick 100 00:06:41.479 tick 250 00:06:41.479 tick 100 00:06:41.479 tick 100 00:06:41.479 test_end 00:06:41.479 00:06:41.479 real 0m1.796s 00:06:41.479 user 0m1.574s 00:06:41.479 sys 0m0.113s 00:06:41.479 17:58:33 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.479 17:58:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:41.479 ************************************ 00:06:41.479 END TEST event_reactor 00:06:41.479 ************************************ 00:06:41.479 17:58:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:41.479 17:58:33 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:41.479 17:58:33 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.479 17:58:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.479 ************************************ 00:06:41.479 START TEST event_reactor_perf 00:06:41.479 ************************************ 00:06:41.479 17:58:33 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:41.479 [2024-05-15 17:58:33.822211] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
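The -c/-m arguments in the EAL parameter lines here are hex core masks: bit N selects lcore N, so -m 0xF pins four reactors (lcores 0-3, matching the per-lcore counters printed above) and -c 0x1 a single one. A small decoding sketch, purely illustrative:

  mask=0xF                                    # as passed via -m above
  for (( core = 0; core < 64; core++ )); do
    (( (mask >> core) & 1 )) && echo "lcore $core selected"   # 0xF -> lcores 0 1 2 3
  done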
00:06:41.479 [2024-05-15 17:58:33.822418] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62558 ] 00:06:41.738 [2024-05-15 17:58:33.995689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.738 [2024-05-15 17:58:34.232281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.114 test_start 00:06:43.114 test_end 00:06:43.114 Performance: 303240 events per second 00:06:43.114 ************************************ 00:06:43.114 END TEST event_reactor_perf 00:06:43.114 ************************************ 00:06:43.114 00:06:43.114 real 0m1.793s 00:06:43.114 user 0m1.569s 00:06:43.114 sys 0m0.115s 00:06:43.114 17:58:35 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.114 17:58:35 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.114 17:58:35 event -- event/event.sh@49 -- # uname -s 00:06:43.373 17:58:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:43.373 17:58:35 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:43.373 17:58:35 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.373 17:58:35 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.373 17:58:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.373 ************************************ 00:06:43.373 START TEST event_scheduler 00:06:43.373 ************************************ 00:06:43.373 17:58:35 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:43.373 * Looking for test storage... 00:06:43.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:43.373 17:58:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:43.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.373 17:58:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62626 00:06:43.373 17:58:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.373 17:58:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:43.373 17:58:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62626 00:06:43.373 17:58:35 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 62626 ']' 00:06:43.373 17:58:35 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.373 17:58:35 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.373 17:58:35 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.373 17:58:35 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.373 17:58:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.373 [2024-05-15 17:58:35.789686] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
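waitforlisten above simply blocks until the freshly spawned app answers on its UNIX domain socket. A hedged sketch of the idea, polling rpc.py until an RPC succeeds (the 100-retry bound mirrors max_retries in the log; the poll interval is an assumption, not the real helper's value):

  rpc_addr=/var/tmp/spdk.sock
  for (( i = 0; i < 100; i++ )); do
    scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break   # socket is answering
    sleep 0.1                                 # assumed poll interval
  done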
00:06:43.373 [2024-05-15 17:58:35.790129] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62626 ] 00:06:43.633 [2024-05-15 17:58:35.956104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.891 [2024-05-15 17:58:36.221609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.891 [2024-05-15 17:58:36.221757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.891 [2024-05-15 17:58:36.221912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.891 [2024-05-15 17:58:36.221928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.458 17:58:36 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.458 17:58:36 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:44.458 17:58:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:44.458 17:58:36 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.458 17:58:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:44.458 POWER: Env isn't set yet! 00:06:44.458 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:44.458 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:44.458 POWER: Cannot set governor of lcore 0 to userspace 00:06:44.458 POWER: Attempting to initialise PSTAT power management... 00:06:44.458 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:44.458 POWER: Cannot set governor of lcore 0 to performance 00:06:44.458 POWER: Attempting to initialise AMD PSTATE power management... 00:06:44.458 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:44.458 POWER: Cannot set governor of lcore 0 to userspace 00:06:44.458 POWER: Attempting to initialise CPPC power management... 00:06:44.458 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:44.458 POWER: Cannot set governor of lcore 0 to userspace 00:06:44.458 POWER: Attempting to initialise VM power management... 00:06:44.458 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:44.458 POWER: Unable to set Power Management Environment for lcore 0 00:06:44.458 [2024-05-15 17:58:36.759811] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:44.458 [2024-05-15 17:58:36.759837] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:44.458 [2024-05-15 17:58:36.759851] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:44.458 17:58:36 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.458 17:58:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:44.458 17:58:36 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.458 17:58:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 [2024-05-15 17:58:37.068652] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
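The POWER errors above are expected in this environment: the dynamic scheduler tries each cpufreq driver in turn, and inside a VM the scaling_governor sysfs nodes are typically not exposed, so it falls back as the dpdk_governor messages show. One quick way to see what a host actually offers (standard cpufreq sysfs paths):

  for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    [ -e "$gov" ] || { echo 'no cpufreq governors exposed'; break; }   # the VM case above
    echo "$gov: $(cat "$gov")"                # e.g. performance or powersave
  done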
00:06:44.718 17:58:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.718 17:58:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:44.718 17:58:37 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.718 17:58:37 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 ************************************ 00:06:44.718 START TEST scheduler_create_thread 00:06:44.718 ************************************ 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 2 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 3 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 4 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 5 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 6 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 7 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 8 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 9 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 10 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.718 17:58:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.095 ************************************ 00:06:46.095 END TEST scheduler_create_thread 00:06:46.095 ************************************ 00:06:46.095 17:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.095 00:06:46.095 real 0m1.177s 00:06:46.095 user 0m0.018s 00:06:46.095 sys 0m0.006s 00:06:46.095 17:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.095 17:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.095 17:58:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:46.095 17:58:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62626 00:06:46.095 17:58:38 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 62626 ']' 00:06:46.095 17:58:38 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 62626 00:06:46.095 17:58:38 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:46.095 17:58:38 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:46.095 17:58:38 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62626 00:06:46.095 killing process with pid 62626 00:06:46.095 17:58:38 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:46.095 17:58:38 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:46.095 17:58:38 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62626' 00:06:46.095 17:58:38 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 62626 00:06:46.095 17:58:38 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 62626 00:06:46.356 [2024-05-15 17:58:38.739622] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
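Stripped of the xtrace noise, scheduler_create_thread drives the test app purely through scheduler_plugin RPCs: create pinned active and idle threads, flip one to 50% active, then create and delete another. The same sequence can be replayed by hand, assuming rpc.py can import the plugin (e.g. with the scheduler test directory on PYTHONPATH); this is a sketch, not the test script itself:

  rpc() { scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin "$@"; }
  tid=$(rpc scheduler_thread_create -n half_active -a 0)   # returns the new thread id
  rpc scheduler_thread_set_active "$tid" 50                # as with thread 11 above
  tid=$(rpc scheduler_thread_create -n deleted -a 100)
  rpc scheduler_thread_delete "$tid"                       # as with thread 12 above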
00:06:47.775 ************************************ 00:06:47.775 END TEST event_scheduler 00:06:47.775 ************************************ 00:06:47.775 00:06:47.775 real 0m4.260s 00:06:47.775 user 0m7.162s 00:06:47.775 sys 0m0.504s 00:06:47.775 17:58:39 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.775 17:58:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.775 17:58:39 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:47.775 17:58:39 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:47.775 17:58:39 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:47.775 17:58:39 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.775 17:58:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.775 ************************************ 00:06:47.775 START TEST app_repeat 00:06:47.775 ************************************ 00:06:47.775 17:58:39 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:47.775 Process app_repeat pid: 62721 00:06:47.775 spdk_app_start Round 0 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62721 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62721' 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:47.775 17:58:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62721 /var/tmp/spdk-nbd.sock 00:06:47.775 17:58:39 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 62721 ']' 00:06:47.775 17:58:39 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:47.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:47.775 17:58:39 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.775 17:58:39 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:47.775 17:58:39 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.775 17:58:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.775 [2024-05-15 17:58:40.011860] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
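app_repeat_test then loops three rounds over the same app lifecycle; the trap shown above guarantees the app dies if any round fails. The skeleton, reduced from the log (round bodies elided):

  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT   # as set by event.sh@20
  for i in {0..2}; do
    echo "spdk_app_start Round $i"
    # each round re-creates Malloc0/Malloc1 and verifies them over /dev/nbd0, /dev/nbd1
  done
  trap - SIGINT SIGTERM EXIT                  # clear the handler on a clean finish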
00:06:47.776 [2024-05-15 17:58:40.012052] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62721 ] 00:06:47.776 [2024-05-15 17:58:40.189321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.034 [2024-05-15 17:58:40.432609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.034 [2024-05-15 17:58:40.432645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.602 17:58:40 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:48.602 17:58:40 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:48.602 17:58:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.861 Malloc0 00:06:48.861 17:58:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.119 Malloc1 00:06:49.119 17:58:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.119 17:58:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.379 /dev/nbd0 00:06:49.379 17:58:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.379 17:58:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.379 17:58:41 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:49.379 17:58:41 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:49.379 17:58:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:49.379 17:58:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:49.379 17:58:41 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:49.379 17:58:41 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:06:49.379 17:58:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:49.379 17:58:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:49.379 17:58:41 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.379 1+0 records in 00:06:49.379 1+0 records out 00:06:49.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488987 s, 8.4 MB/s 00:06:49.379 17:58:41 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.379 17:58:41 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:49.379 17:58:41 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.379 17:58:41 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:49.379 17:58:41 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:49.379 17:58:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.379 17:58:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.379 17:58:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.948 /dev/nbd1 00:06:49.948 17:58:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.948 17:58:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.948 1+0 records in 00:06:49.948 1+0 records out 00:06:49.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100647 s, 4.1 MB/s 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:49.948 17:58:42 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:49.948 17:58:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.948 17:58:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.948 17:58:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.948 17:58:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.948 
17:58:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.948 17:58:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.948 { 00:06:49.948 "nbd_device": "/dev/nbd0", 00:06:49.948 "bdev_name": "Malloc0" 00:06:49.948 }, 00:06:49.948 { 00:06:49.948 "nbd_device": "/dev/nbd1", 00:06:49.948 "bdev_name": "Malloc1" 00:06:49.948 } 00:06:49.948 ]' 00:06:49.948 17:58:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.948 17:58:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.948 { 00:06:49.948 "nbd_device": "/dev/nbd0", 00:06:49.948 "bdev_name": "Malloc0" 00:06:49.948 }, 00:06:49.948 { 00:06:49.948 "nbd_device": "/dev/nbd1", 00:06:49.948 "bdev_name": "Malloc1" 00:06:49.948 } 00:06:49.948 ]' 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.208 /dev/nbd1' 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.208 /dev/nbd1' 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.208 256+0 records in 00:06:50.208 256+0 records out 00:06:50.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100671 s, 104 MB/s 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.208 256+0 records in 00:06:50.208 256+0 records out 00:06:50.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274556 s, 38.2 MB/s 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.208 256+0 records in 00:06:50.208 256+0 records out 00:06:50.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0403272 s, 26.0 MB/s 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.208 17:58:42 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.208 17:58:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.468 17:58:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.468 17:58:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.468 17:58:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.468 17:58:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.468 17:58:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.468 17:58:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.468 17:58:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.468 17:58:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.468 17:58:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.468 17:58:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.726 17:58:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.726 17:58:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.726 17:58:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.726 17:58:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.726 17:58:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.726 17:58:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.726 17:58:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.726 17:58:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.726 17:58:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.726 17:58:43 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.726 17:58:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.985 17:58:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.985 17:58:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.985 17:58:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.245 17:58:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.245 17:58:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.245 17:58:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.245 17:58:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:51.245 17:58:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.245 17:58:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.245 17:58:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.245 17:58:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.245 17:58:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.245 17:58:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:51.504 17:58:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:52.881 [2024-05-15 17:58:45.115222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.881 [2024-05-15 17:58:45.338348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.881 [2024-05-15 17:58:45.338350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.139 [2024-05-15 17:58:45.527179] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.139 [2024-05-15 17:58:45.527287] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.517 spdk_app_start Round 1 00:06:54.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.517 17:58:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:54.517 17:58:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:54.517 17:58:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62721 /var/tmp/spdk-nbd.sock 00:06:54.517 17:58:46 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 62721 ']' 00:06:54.517 17:58:46 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.517 17:58:46 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:54.517 17:58:46 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
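Round 1 now repeats the same bdev-over-nbd data path as Round 0. Collapsed to just the commands the log issues (64 MiB malloc bdevs with 4 KiB blocks, 1 MiB of random data verified per device):

  nbd_rpc() { scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  nbd_rpc bdev_malloc_create 64 4096          # -> Malloc0
  nbd_rpc nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0          # fails loudly on any mismatch
  nbd_rpc nbd_stop_disk /dev/nbd0
  rm nbdrandtest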
00:06:54.517 17:58:46 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:54.517 17:58:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.776 17:58:47 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.776 17:58:47 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:54.776 17:58:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.034 Malloc0 00:06:55.293 17:58:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.552 Malloc1 00:06:55.552 17:58:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.552 17:58:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:55.811 /dev/nbd0 00:06:55.811 17:58:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:55.811 17:58:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:55.811 1+0 records in 00:06:55.811 1+0 records out 
00:06:55.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296854 s, 13.8 MB/s 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:55.811 17:58:48 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:55.811 17:58:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.811 17:58:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.811 17:58:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:56.070 /dev/nbd1 00:06:56.329 17:58:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.329 17:58:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.329 1+0 records in 00:06:56.329 1+0 records out 00:06:56.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533419 s, 7.7 MB/s 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:56.329 17:58:48 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:56.329 17:58:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.329 17:58:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.329 17:58:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.329 17:58:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.330 17:58:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.588 17:58:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.588 { 00:06:56.589 "nbd_device": "/dev/nbd0", 00:06:56.589 "bdev_name": "Malloc0" 00:06:56.589 }, 00:06:56.589 { 00:06:56.589 "nbd_device": "/dev/nbd1", 00:06:56.589 "bdev_name": "Malloc1" 00:06:56.589 } 
00:06:56.589 ]' 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.589 { 00:06:56.589 "nbd_device": "/dev/nbd0", 00:06:56.589 "bdev_name": "Malloc0" 00:06:56.589 }, 00:06:56.589 { 00:06:56.589 "nbd_device": "/dev/nbd1", 00:06:56.589 "bdev_name": "Malloc1" 00:06:56.589 } 00:06:56.589 ]' 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.589 /dev/nbd1' 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.589 /dev/nbd1' 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:56.589 256+0 records in 00:06:56.589 256+0 records out 00:06:56.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00801065 s, 131 MB/s 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.589 17:58:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.589 256+0 records in 00:06:56.589 256+0 records out 00:06:56.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0323354 s, 32.4 MB/s 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.589 256+0 records in 00:06:56.589 256+0 records out 00:06:56.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0395212 s, 26.5 MB/s 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.589 17:58:49 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.589 17:58:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.168 17:58:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.168 17:58:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.168 17:58:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.168 17:58:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.169 17:58:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.169 17:58:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.169 17:58:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.169 17:58:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.169 17:58:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.169 17:58:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.169 17:58:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.169 17:58:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.169 17:58:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.431 17:58:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.431 17:58:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.431 17:58:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.431 17:58:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.431 17:58:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.431 17:58:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.431 17:58:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.431 17:58:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.690 17:58:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.690 17:58:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.690 17:58:49 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:57.690 17:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.690 17:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.690 17:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.690 17:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.690 17:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.690 17:58:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.690 17:58:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.690 17:58:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.690 17:58:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.690 17:58:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:58.258 17:58:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:59.636 [2024-05-15 17:58:51.720726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.636 [2024-05-15 17:58:51.963981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.636 [2024-05-15 17:58:51.963982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.894 [2024-05-15 17:58:52.157451] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:59.894 [2024-05-15 17:58:52.157557] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:01.336 spdk_app_start Round 2 00:07:01.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:01.336 17:58:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:01.336 17:58:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:01.336 17:58:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62721 /var/tmp/spdk-nbd.sock 00:07:01.336 17:58:53 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 62721 ']' 00:07:01.336 17:58:53 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.336 17:58:53 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.336 17:58:53 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
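Each app_repeat round above runs the same data path: two 64 MiB malloc bdevs are created over the RPC socket, exported as /dev/nbd0 and /dev/nbd1, filled with 1 MiB of random data via dd, and verified byte-for-byte with cmp before the disks are stopped again. The sketch below condenses that flow to a single device; it assumes an SPDK app is already listening on the same /var/tmp/spdk-nbd.sock socket, that the kernel nbd module is loaded, and that /dev/nbd0 is free.

#!/usr/bin/env bash
# Minimal single-disk version of the write/verify round traced above.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
tmp=$(mktemp)

bdev=$("$rpc" -s "$sock" bdev_malloc_create 64 4096)   # prints the new bdev name, e.g. Malloc0
"$rpc" -s "$sock" nbd_start_disk "$bdev" /dev/nbd0

# waitfornbd equivalent: poll /proc/partitions until the kernel lists nbd0.
for _ in $(seq 1 20); do
  grep -q -w nbd0 /proc/partitions && break
  sleep 0.1
done

dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write through the export
cmp -b -n 1M "$tmp" /dev/nbd0                              # byte-for-byte verify

"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
rm -f "$tmp"

The oflag=direct on the write matters: it forces dd past the page cache so the data actually travels through the NBD export into the malloc bdev instead of being acknowledged out of kernel memory.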
00:07:01.336 17:58:53 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.336 17:58:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.336 17:58:53 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.336 17:58:53 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:01.336 17:58:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.595 Malloc0 00:07:01.595 17:58:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.854 Malloc1 00:07:01.854 17:58:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.854 17:58:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:02.421 /dev/nbd0 00:07:02.421 17:58:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.421 17:58:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.421 1+0 records in 00:07:02.421 1+0 records out 
00:07:02.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235955 s, 17.4 MB/s 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:02.421 17:58:54 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:02.421 17:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.421 17:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.421 17:58:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:02.679 /dev/nbd1 00:07:02.679 17:58:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:02.679 17:58:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.679 1+0 records in 00:07:02.679 1+0 records out 00:07:02.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461313 s, 8.9 MB/s 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:02.679 17:58:54 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:02.679 17:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.679 17:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.679 17:58:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.679 17:58:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.679 17:58:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.938 { 00:07:02.938 "nbd_device": "/dev/nbd0", 00:07:02.938 "bdev_name": "Malloc0" 00:07:02.938 }, 00:07:02.938 { 00:07:02.938 "nbd_device": "/dev/nbd1", 00:07:02.938 "bdev_name": "Malloc1" 00:07:02.938 } 
00:07:02.938 ]' 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.938 { 00:07:02.938 "nbd_device": "/dev/nbd0", 00:07:02.938 "bdev_name": "Malloc0" 00:07:02.938 }, 00:07:02.938 { 00:07:02.938 "nbd_device": "/dev/nbd1", 00:07:02.938 "bdev_name": "Malloc1" 00:07:02.938 } 00:07:02.938 ]' 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.938 /dev/nbd1' 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.938 /dev/nbd1' 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:02.938 256+0 records in 00:07:02.938 256+0 records out 00:07:02.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00828219 s, 127 MB/s 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.938 256+0 records in 00:07:02.938 256+0 records out 00:07:02.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276539 s, 37.9 MB/s 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.938 256+0 records in 00:07:02.938 256+0 records out 00:07:02.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0379527 s, 27.6 MB/s 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:02.938 17:58:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.938 17:58:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.505 17:58:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.505 17:58:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.505 17:58:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.505 17:58:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.505 17:58:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.505 17:58:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.505 17:58:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.505 17:58:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.505 17:58:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.505 17:58:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.764 17:58:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.764 17:58:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.764 17:58:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.764 17:58:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.764 17:58:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.764 17:58:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.764 17:58:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.764 17:58:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.764 17:58:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.764 17:58:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.764 17:58:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.022 17:58:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.022 17:58:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.022 17:58:56 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.022 17:58:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:04.022 17:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.022 17:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.022 17:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:04.022 17:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.022 17:58:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.022 17:58:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:04.022 17:58:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:04.022 17:58:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:04.022 17:58:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:04.590 17:58:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:05.965 [2024-05-15 17:58:58.151591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.965 [2024-05-15 17:58:58.404580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.965 [2024-05-15 17:58:58.404585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.224 [2024-05-15 17:58:58.604728] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:06.224 [2024-05-15 17:58:58.604843] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:07.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:07.599 17:58:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62721 /var/tmp/spdk-nbd.sock 00:07:07.599 17:58:59 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 62721 ']' 00:07:07.599 17:58:59 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.599 17:58:59 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:07.599 17:58:59 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
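Note how the suite never trusts its own bookkeeping for the device count: both before teardown (expecting 2) and after (expecting 0) it re-derives the count from the target by listing the disks over RPC, extracting the device paths with jq, and counting them with grep -c. A standalone version of that check, assuming the same socket as above and jq on PATH:

# Re-derive the attached-device count from the target, as nbd_get_count does above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
echo "attached nbd devices: $count"

The trailing "|| true" mirrors the bare "true" visible in the trace: grep -c prints 0 but exits non-zero when nothing matches, so without the fallback the empty post-teardown list would abort a set -e script even though 0 is exactly the answer the check expects.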
00:07:07.599 17:58:59 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:07.599 17:58:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.870 17:59:00 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:07.870 17:59:00 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:07.870 17:59:00 event.app_repeat -- event/event.sh@39 -- # killprocess 62721 00:07:07.870 17:59:00 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 62721 ']' 00:07:07.870 17:59:00 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 62721 00:07:07.870 17:59:00 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:07:07.870 17:59:00 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:07.870 17:59:00 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62721 00:07:07.870 killing process with pid 62721 00:07:07.870 17:59:00 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:07.870 17:59:00 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:07.870 17:59:00 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62721' 00:07:07.870 17:59:00 event.app_repeat -- common/autotest_common.sh@965 -- # kill 62721 00:07:07.870 17:59:00 event.app_repeat -- common/autotest_common.sh@970 -- # wait 62721 00:07:08.830 spdk_app_start is called in Round 0. 00:07:08.830 Shutdown signal received, stop current app iteration 00:07:08.830 Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 reinitialization... 00:07:08.830 spdk_app_start is called in Round 1. 00:07:08.830 Shutdown signal received, stop current app iteration 00:07:08.830 Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 reinitialization... 00:07:08.830 spdk_app_start is called in Round 2. 00:07:08.830 Shutdown signal received, stop current app iteration 00:07:08.830 Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 reinitialization... 00:07:08.830 spdk_app_start is called in Round 3. 00:07:08.830 Shutdown signal received, stop current app iteration 00:07:09.089 ************************************ 00:07:09.089 END TEST app_repeat 00:07:09.089 ************************************ 00:07:09.089 17:59:01 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:09.089 17:59:01 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:09.089 00:07:09.089 real 0m21.403s 00:07:09.089 user 0m45.959s 00:07:09.089 sys 0m3.109s 00:07:09.089 17:59:01 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.089 17:59:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.089 17:59:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:09.089 17:59:01 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:09.089 17:59:01 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:09.089 17:59:01 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.089 17:59:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.089 ************************************ 00:07:09.089 START TEST cpu_locks 00:07:09.089 ************************************ 00:07:09.089 17:59:01 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:09.089 * Looking for test storage... 
00:07:09.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:09.089 17:59:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:09.089 17:59:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:09.089 17:59:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:09.089 17:59:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:09.089 17:59:01 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:09.089 17:59:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.089 17:59:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.089 ************************************ 00:07:09.089 START TEST default_locks 00:07:09.089 ************************************ 00:07:09.089 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:07:09.089 17:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63184 00:07:09.089 17:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.089 17:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 63184 00:07:09.089 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 63184 ']' 00:07:09.089 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.089 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:09.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.089 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.089 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:09.089 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.348 [2024-05-15 17:59:01.612829] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:07:09.348 [2024-05-15 17:59:01.612980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63184 ] 00:07:09.348 [2024-05-15 17:59:01.782464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.630 [2024-05-15 17:59:02.088910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.569 17:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:10.569 17:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:10.569 17:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 63184 00:07:10.569 17:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 63184 00:07:10.569 17:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.137 17:59:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 63184 00:07:11.137 17:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 63184 ']' 00:07:11.137 17:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 63184 00:07:11.137 17:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:11.137 17:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:11.137 17:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63184 00:07:11.137 killing process with pid 63184 00:07:11.137 17:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:11.137 17:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:11.137 17:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63184' 00:07:11.137 17:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 63184 00:07:11.137 17:59:03 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 63184 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63184 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63184 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 63184 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 63184 ']' 00:07:13.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
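The ERROR line that follows is the expected outcome, not a failure: after killprocess removes pid 63184, default_locks re-runs waitforlisten under the NOT wrapper, which passes only when the wrapped command fails. The es=1, (( es > 128 )) and (( !es == 0 )) steps traced below are that wrapper classifying the exit status (the >128 branch distinguishes death by signal). A simplified sketch of the inversion pattern, with a stand-in probe in place of the real helper:

# Reduced version of autotest_common.sh's NOT(): succeed only if "$@" fails.
# (The real helper also records the status and special-cases signal exits,
# as the es-handling lines in the trace show.)
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded -> negative test fails
  fi
  return 0     # command failed as expected -> negative test passes
}

# Usage in the spirit of the trace: probing a dead pid must fail.
NOT kill -0 63184 2>/dev/null && echo 'pid 63184 is gone, as expected'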
00:07:13.669 ERROR: process (pid: 63184) is no longer running 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.669 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (63184) - No such process 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:13.669 00:07:13.669 real 0m4.136s 00:07:13.669 user 0m4.090s 00:07:13.669 sys 0m0.765s 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.669 17:59:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.669 ************************************ 00:07:13.669 END TEST default_locks 00:07:13.669 ************************************ 00:07:13.669 17:59:05 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:13.669 17:59:05 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:13.669 17:59:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.669 17:59:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.669 ************************************ 00:07:13.669 START TEST default_locks_via_rpc 00:07:13.669 ************************************ 00:07:13.669 17:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:13.669 17:59:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63259 00:07:13.669 17:59:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 63259 00:07:13.669 17:59:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.669 17:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 63259 ']' 00:07:13.669 17:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.669 17:59:05 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:07:13.669 17:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.669 17:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:13.669 17:59:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.669 [2024-05-15 17:59:05.814902] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:13.669 [2024-05-15 17:59:05.815313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63259 ] 00:07:13.669 [2024-05-15 17:59:05.985016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.928 [2024-05-15 17:59:06.213371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 63259 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 63259 00:07:14.864 17:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.123 17:59:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 63259 00:07:15.123 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 63259 ']' 00:07:15.123 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 63259 00:07:15.123 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:15.123 17:59:07 event.cpu_locks.default_locks_via_rpc 
-- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:15.123 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63259 00:07:15.123 killing process with pid 63259 00:07:15.123 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:15.123 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:15.123 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63259' 00:07:15.123 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 63259 00:07:15.123 17:59:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 63259 00:07:17.707 00:07:17.707 real 0m3.901s 00:07:17.707 user 0m3.843s 00:07:17.707 sys 0m0.741s 00:07:17.707 17:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.707 ************************************ 00:07:17.707 END TEST default_locks_via_rpc 00:07:17.707 ************************************ 00:07:17.707 17:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.707 17:59:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:17.707 17:59:09 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:17.707 17:59:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.707 17:59:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.708 ************************************ 00:07:17.708 START TEST non_locking_app_on_locked_coremask 00:07:17.708 ************************************ 00:07:17.708 17:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:17.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.708 17:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63329 00:07:17.708 17:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63329 /var/tmp/spdk.sock 00:07:17.708 17:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.708 17:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 63329 ']' 00:07:17.708 17:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.708 17:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:17.708 17:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.708 17:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:17.708 17:59:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.708 [2024-05-15 17:59:09.747993] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:07:17.708 [2024-05-15 17:59:09.748149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63329 ] 00:07:17.708 [2024-05-15 17:59:09.911275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.708 [2024-05-15 17:59:10.175353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.643 17:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:18.643 17:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:18.643 17:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63349 00:07:18.643 17:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63349 /var/tmp/spdk2.sock 00:07:18.643 17:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 63349 ']' 00:07:18.643 17:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:18.643 17:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.643 17:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:18.643 17:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.643 17:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:18.643 17:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.643 [2024-05-15 17:59:11.065625] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:18.643 [2024-05-15 17:59:11.065814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63349 ] 00:07:18.907 [2024-05-15 17:59:11.244943] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:18.907 [2024-05-15 17:59:11.245008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.514 [2024-05-15 17:59:11.719184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.418 17:59:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:21.418 17:59:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:21.418 17:59:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63329 00:07:21.418 17:59:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63329 00:07:21.418 17:59:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.984 17:59:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63329 00:07:21.984 17:59:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 63329 ']' 00:07:21.984 17:59:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 63329 00:07:21.984 17:59:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:21.984 17:59:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:21.984 17:59:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63329 00:07:21.984 killing process with pid 63329 00:07:21.984 17:59:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:21.984 17:59:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:21.984 17:59:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63329' 00:07:21.984 17:59:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 63329 00:07:21.984 17:59:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 63329 00:07:27.254 17:59:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63349 00:07:27.254 17:59:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 63349 ']' 00:07:27.254 17:59:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 63349 00:07:27.254 17:59:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:27.254 17:59:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:27.254 17:59:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63349 00:07:27.254 killing process with pid 63349 00:07:27.254 17:59:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:27.254 17:59:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:27.254 17:59:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63349' 00:07:27.254 17:59:18 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 63349 00:07:27.254 17:59:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 63349 00:07:28.633 ************************************ 00:07:28.633 END TEST non_locking_app_on_locked_coremask 00:07:28.633 ************************************ 00:07:28.633 00:07:28.633 real 0m11.357s 00:07:28.633 user 0m11.708s 00:07:28.633 sys 0m1.365s 00:07:28.633 17:59:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.633 17:59:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.633 17:59:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:28.633 17:59:21 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:28.633 17:59:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.633 17:59:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.633 ************************************ 00:07:28.633 START TEST locking_app_on_unlocked_coremask 00:07:28.633 ************************************ 00:07:28.633 17:59:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:28.633 17:59:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63493 00:07:28.633 17:59:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63493 /var/tmp/spdk.sock 00:07:28.633 17:59:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:28.633 17:59:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 63493 ']' 00:07:28.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.633 17:59:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.633 17:59:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:28.633 17:59:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.633 17:59:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:28.633 17:59:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.892 [2024-05-15 17:59:21.159749] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:28.892 [2024-05-15 17:59:21.159907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63493 ] 00:07:28.892 [2024-05-15 17:59:21.323300] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:28.892 [2024-05-15 17:59:21.323404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.151 [2024-05-15 17:59:21.564838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.089 17:59:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:30.089 17:59:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:30.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:30.089 17:59:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63509 00:07:30.089 17:59:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63509 /var/tmp/spdk2.sock 00:07:30.089 17:59:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 63509 ']' 00:07:30.089 17:59:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:30.089 17:59:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:30.089 17:59:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:30.089 17:59:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:30.089 17:59:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:30.089 17:59:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.089 [2024-05-15 17:59:22.446190] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:07:30.089 [2024-05-15 17:59:22.446348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63509 ] 00:07:30.347 [2024-05-15 17:59:22.622335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.611 [2024-05-15 17:59:23.094150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.139 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:33.139 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:33.139 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63509 00:07:33.139 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63509 00:07:33.139 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:33.398 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63493 00:07:33.398 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 63493 ']' 00:07:33.398 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 63493 00:07:33.398 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:33.398 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:33.398 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63493 00:07:33.398 killing process with pid 63493 00:07:33.398 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:33.398 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:33.398 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63493' 00:07:33.398 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 63493 00:07:33.398 17:59:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 63493 00:07:38.665 17:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63509 00:07:38.665 17:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 63509 ']' 00:07:38.665 17:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 63509 00:07:38.665 17:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:38.665 17:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:38.665 17:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63509 00:07:38.665 killing process with pid 63509 00:07:38.665 17:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:38.665 17:59:30 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:38.665 17:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63509' 00:07:38.665 17:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 63509 00:07:38.665 17:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 63509 00:07:40.042 ************************************ 00:07:40.042 END TEST locking_app_on_unlocked_coremask 00:07:40.042 ************************************ 00:07:40.042 00:07:40.042 real 0m11.435s 00:07:40.042 user 0m11.863s 00:07:40.042 sys 0m1.394s 00:07:40.042 17:59:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.042 17:59:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.042 17:59:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:40.042 17:59:32 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:40.042 17:59:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.042 17:59:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.042 ************************************ 00:07:40.042 START TEST locking_app_on_locked_coremask 00:07:40.042 ************************************ 00:07:40.042 17:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:40.042 17:59:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63660 00:07:40.042 17:59:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63660 /var/tmp/spdk.sock 00:07:40.042 17:59:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.042 17:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 63660 ']' 00:07:40.042 17:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.042 17:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:40.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.042 17:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.042 17:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:40.301 17:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.301 [2024-05-15 17:59:32.705812] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:07:40.301 [2024-05-15 17:59:32.705997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63660 ] 00:07:40.566 [2024-05-15 17:59:32.877930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.826 [2024-05-15 17:59:33.117157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63682 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63682 /var/tmp/spdk2.sock 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63682 /var/tmp/spdk2.sock 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63682 /var/tmp/spdk2.sock 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 63682 ']' 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:41.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:41.762 17:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.762 [2024-05-15 17:59:34.022080] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
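The NOT/valid_exec_arg trace above is the harness's expected-failure wrapper: the second target (pid 63682) is launched on the already-claimed core 0 and must exit non-zero for the test to pass. A condensed sketch of that pattern, assuming the helper names from the trace:

    # sketch: invert a command's exit status; the test passes only if the command fails
    NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))   # the trace's (( !es == 0 )) is the same check
    }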
00:07:41.762 [2024-05-15 17:59:34.022535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63682 ] 00:07:41.762 [2024-05-15 17:59:34.205686] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63660 has claimed it. 00:07:41.762 [2024-05-15 17:59:34.205763] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:42.329 ERROR: process (pid: 63682) is no longer running 00:07:42.329 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (63682) - No such process 00:07:42.329 17:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:42.329 17:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:42.329 17:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:42.329 17:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:42.329 17:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:42.329 17:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:42.329 17:59:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63660 00:07:42.330 17:59:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63660 00:07:42.330 17:59:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:42.894 17:59:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63660 00:07:42.894 17:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 63660 ']' 00:07:42.895 17:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 63660 00:07:42.895 17:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:42.895 17:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:42.895 17:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63660 00:07:42.895 killing process with pid 63660 00:07:42.895 17:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:42.895 17:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:42.895 17:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63660' 00:07:42.895 17:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 63660 00:07:42.895 17:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 63660 00:07:45.426 ************************************ 00:07:45.426 END TEST locking_app_on_locked_coremask 00:07:45.426 ************************************ 00:07:45.426 00:07:45.426 real 0m4.786s 00:07:45.426 user 0m5.145s 00:07:45.426 sys 0m0.894s 00:07:45.426 17:59:37 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.426 17:59:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.426 17:59:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:45.426 17:59:37 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:45.426 17:59:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.426 17:59:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.426 ************************************ 00:07:45.426 START TEST locking_overlapped_coremask 00:07:45.426 ************************************ 00:07:45.426 17:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:45.426 17:59:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63750 00:07:45.426 17:59:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63750 /var/tmp/spdk.sock 00:07:45.426 17:59:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:45.426 17:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 63750 ']' 00:07:45.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.426 17:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.427 17:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:45.427 17:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.427 17:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:45.427 17:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.427 [2024-05-15 17:59:37.519657] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:07:45.427 [2024-05-15 17:59:37.519829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63750 ] 00:07:45.427 [2024-05-15 17:59:37.695533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.685 [2024-05-15 17:59:37.962669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.685 [2024-05-15 17:59:37.962760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.685 [2024-05-15 17:59:37.962776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63769 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63769 /var/tmp/spdk2.sock 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63769 /var/tmp/spdk2.sock 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63769 /var/tmp/spdk2.sock 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 63769 ']' 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:46.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:46.355 17:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.613 [2024-05-15 17:59:38.899003] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
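The two core masks overlap by construction: -m 0x7 is binary 00111 (cores 0-2) and -m 0x1c is 11100 (cores 2-4), so both targets contend for core 2 and the second claim must fail, as the error below shows. The intersection is easy to verify:

    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 set: core 2 is claimed twice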
00:07:46.613 [2024-05-15 17:59:38.899461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63769 ] 00:07:46.613 [2024-05-15 17:59:39.085903] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63750 has claimed it. 00:07:46.613 [2024-05-15 17:59:39.085991] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:47.181 ERROR: process (pid: 63769) is no longer running 00:07:47.181 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (63769) - No such process 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63750 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 63750 ']' 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 63750 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:47.181 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63750 00:07:47.182 killing process with pid 63750 00:07:47.182 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:47.182 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:47.182 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63750' 00:07:47.182 17:59:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 63750 00:07:47.182 17:59:39 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 63750 00:07:49.717 00:07:49.717 real 0m4.437s 00:07:49.717 user 0m11.490s 00:07:49.717 sys 0m0.662s 00:07:49.717 17:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.717 17:59:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.717 ************************************ 00:07:49.717 END TEST locking_overlapped_coremask 00:07:49.717 ************************************ 00:07:49.717 17:59:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:49.717 17:59:41 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:49.717 17:59:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.717 17:59:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.717 ************************************ 00:07:49.717 START TEST locking_overlapped_coremask_via_rpc 00:07:49.717 ************************************ 00:07:49.717 17:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:49.717 17:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63833 00:07:49.717 17:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63833 /var/tmp/spdk.sock 00:07:49.717 17:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 63833 ']' 00:07:49.717 17:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:49.717 17:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.717 17:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:49.717 17:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.717 17:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:49.717 17:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.717 [2024-05-15 17:59:42.011387] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:49.717 [2024-05-15 17:59:42.011683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63833 ] 00:07:49.977 [2024-05-15 17:59:42.219644] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:49.977 [2024-05-15 17:59:42.219716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.236 [2024-05-15 17:59:42.517563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.236 [2024-05-15 17:59:42.517724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.236 [2024-05-15 17:59:42.517733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.171 17:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:51.171 17:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:51.171 17:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63857 00:07:51.171 17:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:51.171 17:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63857 /var/tmp/spdk2.sock 00:07:51.171 17:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 63857 ']' 00:07:51.171 17:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.171 17:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.171 17:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:51.171 17:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.171 17:59:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.171 [2024-05-15 17:59:43.475815] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:51.171 [2024-05-15 17:59:43.476217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63857 ] 00:07:51.171 [2024-05-15 17:59:43.668145] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
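In this variant both targets start with --disable-cpumask-locks (hence the two "CPU core locks deactivated" NOTICEs) and the locks are claimed afterwards over JSON-RPC. Assuming the standard SPDK client at scripts/rpc.py, the two rpc_cmd calls traced below correspond to:

    scripts/rpc.py framework_enable_cpumask_locks                         # first target, /var/tmp/spdk.sock
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target; must fail on core 2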
00:07:51.171 [2024-05-15 17:59:43.668214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.737 [2024-05-15 17:59:44.172562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.737 [2024-05-15 17:59:44.176400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.737 [2024-05-15 17:59:44.176413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.271 [2024-05-15 17:59:46.267524] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63833 has claimed it. 00:07:54.271 request: 00:07:54.271 { 00:07:54.271 "method": "framework_enable_cpumask_locks", 00:07:54.271 "req_id": 1 00:07:54.271 } 00:07:54.271 Got JSON-RPC error response 00:07:54.271 response: 00:07:54.271 { 00:07:54.271 "code": -32603, 00:07:54.271 "message": "Failed to claim CPU core: 2" 00:07:54.271 } 00:07:54.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
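-32603 in the response above is the generic JSON-RPC "Internal error" code, onto which SPDK maps the failed core claim. The same exchange can in principle be reproduced by writing the request straight to the Unix socket (a sketch assuming a netcat build with -U support):

    printf '%s' '{"jsonrpc":"2.0","id":1,"method":"framework_enable_cpumask_locks"}' | nc -U /var/tmp/spdk2.sock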
00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63833 /var/tmp/spdk.sock 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 63833 ']' 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63857 /var/tmp/spdk2.sock 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 63857 ']' 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:54.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
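check_remaining_locks, traced below, asserts that after the successful claim exactly one lock file exists per core in the 0x7 mask. A quick manual equivalent of the same check:

    ls /var/tmp/spdk_cpu_lock_*   # expected: spdk_cpu_lock_000 _001 _002, matching cores 0-2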
00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:54.271 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.529 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:54.529 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:54.529 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:54.529 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:54.529 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:54.529 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:54.529 00:07:54.529 real 0m4.993s 00:07:54.529 user 0m1.762s 00:07:54.529 sys 0m0.234s 00:07:54.529 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.529 17:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.529 ************************************ 00:07:54.529 END TEST locking_overlapped_coremask_via_rpc 00:07:54.529 ************************************ 00:07:54.529 17:59:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:54.529 17:59:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63833 ]] 00:07:54.529 17:59:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63833 00:07:54.529 17:59:46 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 63833 ']' 00:07:54.529 17:59:46 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 63833 00:07:54.529 17:59:46 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:54.529 17:59:46 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:54.529 17:59:46 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63833 00:07:54.529 killing process with pid 63833 00:07:54.529 17:59:46 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:54.529 17:59:46 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:54.529 17:59:46 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63833' 00:07:54.529 17:59:46 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 63833 00:07:54.529 17:59:46 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 63833 00:07:57.058 17:59:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63857 ]] 00:07:57.058 17:59:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63857 00:07:57.058 17:59:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 63857 ']' 00:07:57.058 17:59:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 63857 00:07:57.058 17:59:49 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:57.058 17:59:49 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:57.058 
17:59:49 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63857 00:07:57.058 killing process with pid 63857 00:07:57.058 17:59:49 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:57.059 17:59:49 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:57.059 17:59:49 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63857' 00:07:57.059 17:59:49 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 63857 00:07:57.059 17:59:49 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 63857 00:07:58.961 17:59:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:58.961 Process with pid 63833 is not found 00:07:58.961 17:59:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:58.961 17:59:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63833 ]] 00:07:58.961 17:59:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63833 00:07:58.961 17:59:51 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 63833 ']' 00:07:58.961 17:59:51 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 63833 00:07:58.961 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (63833) - No such process 00:07:58.961 17:59:51 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 63833 is not found' 00:07:58.961 17:59:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63857 ]] 00:07:58.961 17:59:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63857 00:07:58.961 17:59:51 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 63857 ']' 00:07:58.961 17:59:51 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 63857 00:07:58.961 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (63857) - No such process 00:07:58.961 Process with pid 63857 is not found 00:07:58.961 17:59:51 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 63857 is not found' 00:07:58.961 17:59:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:58.961 00:07:58.961 real 0m50.023s 00:07:58.961 user 1m25.018s 00:07:58.961 sys 0m7.329s 00:07:58.961 17:59:51 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.961 ************************************ 00:07:58.961 END TEST cpu_locks 00:07:58.961 ************************************ 00:07:58.961 17:59:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.219 ************************************ 00:07:59.219 END TEST event 00:07:59.219 ************************************ 00:07:59.219 00:07:59.219 real 1m21.507s 00:07:59.219 user 2m25.969s 00:07:59.219 sys 0m11.554s 00:07:59.219 17:59:51 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.219 17:59:51 event -- common/autotest_common.sh@10 -- # set +x 00:07:59.219 17:59:51 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:59.219 17:59:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:59.219 17:59:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.219 17:59:51 -- common/autotest_common.sh@10 -- # set +x 00:07:59.219 ************************************ 00:07:59.219 START TEST thread 00:07:59.219 ************************************ 00:07:59.219 17:59:51 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:59.219 * Looking for test storage... 
00:07:59.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:59.219 17:59:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:59.219 17:59:51 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:59.219 17:59:51 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.219 17:59:51 thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.219 ************************************ 00:07:59.219 START TEST thread_poller_perf 00:07:59.219 ************************************ 00:07:59.219 17:59:51 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:59.219 [2024-05-15 17:59:51.667406] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:59.219 [2024-05-15 17:59:51.667577] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64044 ] 00:07:59.479 [2024-05-15 17:59:51.838638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.737 [2024-05-15 17:59:52.124230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.737 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:01.146 ====================================== 00:08:01.146 busy:2208401976 (cyc) 00:08:01.146 total_run_count: 300000 00:08:01.146 tsc_hz: 2200000000 (cyc) 00:08:01.146 ====================================== 00:08:01.146 poller_cost: 7361 (cyc), 3345 (nsec) 00:08:01.146 00:08:01.146 real 0m1.885s 00:08:01.146 user 0m1.649s 00:08:01.146 sys 0m0.123s 00:08:01.146 17:59:53 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.146 17:59:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:01.146 ************************************ 00:08:01.146 END TEST thread_poller_perf 00:08:01.146 ************************************ 00:08:01.146 17:59:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:01.146 17:59:53 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:01.146 17:59:53 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.146 17:59:53 thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.146 ************************************ 00:08:01.146 START TEST thread_poller_perf 00:08:01.146 ************************************ 00:08:01.146 17:59:53 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:01.146 [2024-05-15 17:59:53.607313] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
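poller_cost above is the average TSC cost of one poller invocation, converted to nanoseconds via tsc_hz; the first run's figures reproduce with plain integer arithmetic:

    echo $(( 2208401976 / 300000 ))               # 7361 cyc per call (busy cycles / total_run_count)
    echo $(( 7361 * 1000000000 / 2200000000 ))    # 3345 nsec at a 2.2 GHz TSC

The -l 0 run below (572 cyc, 260 nsec) follows the same formula with a much larger total_run_count, since a zero-period poller is polled on every reactor iteration.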
00:08:01.146 [2024-05-15 17:59:53.607668] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64086 ] 00:08:01.404 [2024-05-15 17:59:53.778734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.662 [2024-05-15 17:59:54.017469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.662 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:03.041 ====================================== 00:08:03.041 busy:2204669367 (cyc) 00:08:03.041 total_run_count: 3852000 00:08:03.041 tsc_hz: 2200000000 (cyc) 00:08:03.041 ====================================== 00:08:03.041 poller_cost: 572 (cyc), 260 (nsec) 00:08:03.041 00:08:03.041 real 0m1.827s 00:08:03.041 user 0m1.598s 00:08:03.041 sys 0m0.118s 00:08:03.041 ************************************ 00:08:03.041 END TEST thread_poller_perf 00:08:03.041 ************************************ 00:08:03.041 17:59:55 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.041 17:59:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:03.041 17:59:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:03.041 ************************************ 00:08:03.041 END TEST thread 00:08:03.041 ************************************ 00:08:03.041 00:08:03.041 real 0m3.907s 00:08:03.041 user 0m3.311s 00:08:03.041 sys 0m0.363s 00:08:03.041 17:59:55 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.041 17:59:55 thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.041 17:59:55 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:03.041 17:59:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:03.041 17:59:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:03.041 17:59:55 -- common/autotest_common.sh@10 -- # set +x 00:08:03.041 ************************************ 00:08:03.041 START TEST accel 00:08:03.041 ************************************ 00:08:03.041 17:59:55 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:03.300 * Looking for test storage... 00:08:03.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:03.300 17:59:55 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:03.300 17:59:55 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:03.300 17:59:55 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:03.300 17:59:55 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=64167 00:08:03.300 17:59:55 accel -- accel/accel.sh@63 -- # waitforlisten 64167 00:08:03.300 17:59:55 accel -- common/autotest_common.sh@827 -- # '[' -z 64167 ']' 00:08:03.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.300 17:59:55 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.301 17:59:55 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:03.301 17:59:55 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:03.301 17:59:55 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:03.301 17:59:55 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:03.301 17:59:55 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:03.301 17:59:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.301 17:59:55 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.301 17:59:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.301 17:59:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.301 17:59:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.301 17:59:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.301 17:59:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:03.301 17:59:55 accel -- accel/accel.sh@41 -- # jq -r . 00:08:03.301 [2024-05-15 17:59:55.694793] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:03.301 [2024-05-15 17:59:55.695000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64167 ] 00:08:03.559 [2024-05-15 17:59:55.868734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.818 [2024-05-15 17:59:56.127552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.753 17:59:56 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:04.753 17:59:56 accel -- common/autotest_common.sh@860 -- # return 0 00:08:04.753 17:59:56 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:04.753 17:59:56 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:04.753 17:59:56 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:04.753 17:59:56 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:04.753 17:59:56 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:04.753 17:59:56 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:04.753 17:59:56 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.753 17:59:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.753 17:59:56 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:04.753 17:59:56 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:04.753 17:59:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:04.753 17:59:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:04.753 17:59:56 accel -- accel/accel.sh@75 -- # killprocess 64167 00:08:04.753 17:59:56 accel -- common/autotest_common.sh@946 -- # '[' -z 64167 ']' 00:08:04.753 17:59:56 accel -- common/autotest_common.sh@950 -- # kill -0 64167 00:08:04.753 17:59:56 accel -- common/autotest_common.sh@951 -- # uname 00:08:04.753 17:59:56 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:04.753 17:59:56 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64167 00:08:04.753 17:59:56 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:04.753 killing process with pid 64167 00:08:04.753 17:59:57 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:04.753 17:59:57 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64167' 00:08:04.753 17:59:57 accel -- common/autotest_common.sh@965 -- # kill 64167 00:08:04.753 17:59:57 accel -- common/autotest_common.sh@970 -- # wait 64167 00:08:06.656 17:59:59 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:06.656 17:59:59 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:06.656 17:59:59 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:06.656 17:59:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.656 17:59:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.915 17:59:59 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:08:06.915 17:59:59 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:06.915 17:59:59 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:06.915 17:59:59 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.915 17:59:59 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.915 17:59:59 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.915 17:59:59 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.915 17:59:59 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.915 17:59:59 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:06.915 17:59:59 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:08:06.915 17:59:59 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:06.915 17:59:59 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:06.915 17:59:59 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:06.916 17:59:59 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:06.916 17:59:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.916 17:59:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.916 ************************************ 00:08:06.916 START TEST accel_missing_filename 00:08:06.916 ************************************ 00:08:06.916 17:59:59 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:08:06.916 17:59:59 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:06.916 17:59:59 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:06.916 17:59:59 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:06.916 17:59:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.916 17:59:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:06.916 17:59:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.916 17:59:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:06.916 17:59:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:06.916 17:59:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:06.916 17:59:59 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.916 17:59:59 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.916 17:59:59 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.916 17:59:59 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.916 17:59:59 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.916 17:59:59 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:06.916 17:59:59 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:06.916 [2024-05-15 17:59:59.358541] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:06.916 [2024-05-15 17:59:59.358764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64237 ] 00:08:07.174 [2024-05-15 17:59:59.529897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.433 [2024-05-15 17:59:59.789910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.691 [2024-05-15 17:59:59.985562] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.267 [2024-05-15 18:00:00.468002] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:08:08.546 A filename is required. 
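"A filename is required." is the expected outcome here: -w compress was deliberately run without an input file. The passing form, visible in the compress_verify test that follows, supplies one with -l:

    accel_perf -t 1 -w compress                                                  # fails: A filename is required.
    accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib   # compresses the given file

Adding -y to request result verification is what the next test exercises; compression does not support it, so that run aborts.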
00:08:08.546 18:00:00 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:08.546 18:00:00 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:08.546 18:00:00 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:08.546 18:00:00 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:08.546 18:00:00 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:08.546 18:00:00 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:08.546 00:08:08.546 real 0m1.565s 00:08:08.546 user 0m1.319s 00:08:08.546 sys 0m0.211s 00:08:08.546 18:00:00 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:08.546 ************************************ 00:08:08.546 END TEST accel_missing_filename 00:08:08.546 ************************************ 00:08:08.546 18:00:00 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:08.546 18:00:00 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:08.546 18:00:00 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:08:08.546 18:00:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.546 18:00:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.546 ************************************ 00:08:08.546 START TEST accel_compress_verify 00:08:08.546 ************************************ 00:08:08.546 18:00:00 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:08.546 18:00:00 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:08.546 18:00:00 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:08.546 18:00:00 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:08.546 18:00:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.546 18:00:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:08.546 18:00:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.546 18:00:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:08.546 18:00:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:08.546 18:00:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:08.546 18:00:00 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.546 18:00:00 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.546 18:00:00 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.546 18:00:00 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.546 18:00:00 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.546 18:00:00 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:08.546 18:00:00 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:08:08.546 [2024-05-15 18:00:00.948150] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:08.546 [2024-05-15 18:00:00.948344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64279 ] 00:08:08.805 [2024-05-15 18:00:01.125649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.062 [2024-05-15 18:00:01.370340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.319 [2024-05-15 18:00:01.576388] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.575 [2024-05-15 18:00:02.069029] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:08:10.137 00:08:10.137 Compression does not support the verify option, aborting. 00:08:10.137 18:00:02 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:10.137 18:00:02 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:10.137 18:00:02 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:10.137 18:00:02 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:10.137 18:00:02 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:10.137 18:00:02 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:10.137 00:08:10.137 real 0m1.548s 00:08:10.137 user 0m1.277s 00:08:10.137 sys 0m0.203s 00:08:10.137 18:00:02 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.137 ************************************ 00:08:10.137 END TEST accel_compress_verify 00:08:10.137 ************************************ 00:08:10.137 18:00:02 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:10.137 18:00:02 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:10.137 18:00:02 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:10.137 18:00:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.137 18:00:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.137 ************************************ 00:08:10.137 START TEST accel_wrong_workload 00:08:10.137 ************************************ 00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:08:10.137 18:00:02 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:10.137 18:00:02 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:10.137 18:00:02 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.137 18:00:02 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.137 18:00:02 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.137 18:00:02 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.137 18:00:02 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.137 18:00:02 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:10.137 18:00:02 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:10.137 Unsupported workload type: foobar 00:08:10.137 [2024-05-15 18:00:02.541869] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:10.137 accel_perf options: 00:08:10.137 [-h help message] 00:08:10.137 [-q queue depth per core] 00:08:10.137 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:10.137 [-T number of threads per core 00:08:10.137 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:10.137 [-t time in seconds] 00:08:10.137 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:10.137 [ dif_verify, dif_generate, dif_generate_copy 00:08:10.137 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:10.137 [-l for compress/decompress workloads, name of uncompressed input file 00:08:10.137 [-S for crc32c workload, use this seed value (default 0) 00:08:10.137 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:10.137 [-f for fill workload, use this BYTE value (default 255) 00:08:10.137 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:10.137 [-y verify result if this switch is on] 00:08:10.137 [-a tasks to allocate per core (default: same value as -q)] 00:08:10.137 Can be used to spread operations across a wider range of memory.
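The usage dump above is accel_perf refusing to start: foobar is not in the workload list, so spdk_app_parse_args fails before any I/O is issued. The trace at the top of this block also shows where the -c argument comes from: build_accel_config emits a JSON config (the accel_json_cfg fragments joined under IFS=, and piped through jq -r .), and bash process substitution hands it to accel_perf as /dev/fd/62. A hedged sketch of that plumbing, with the JSON skeleton assumed rather than copied from accel.sh, plus a well-formed invocation using only flags from the usage text:

    # Sketch only: the helper name and trace steps match the log, but the
    # exact JSON layout inside accel.sh is an assumption.
    build_accel_config() {
        local accel_json_cfg=()   # would collect per-module config fragments
        local IFS=,               # fragments, if any, are joined with commas
        echo "{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}" | jq -r .
    }

    # <() exposes the helper's stdout as /dev/fd/NN, which accel_perf opens
    # like a file; -w crc32c -S 32 -y mirrors the accel_crc32c test below.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c <(build_accel_config) -t 1 -w crc32c -S 32 -y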
00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:10.137 ************************************ 00:08:10.137 END TEST accel_wrong_workload 00:08:10.137 ************************************ 00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:10.137 00:08:10.137 real 0m0.079s 00:08:10.137 user 0m0.078s 00:08:10.137 sys 0m0.043s 00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.137 18:00:02 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:10.137 18:00:02 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:10.137 18:00:02 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:08:10.137 18:00:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.137 18:00:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.137 ************************************ 00:08:10.137 START TEST accel_negative_buffers 00:08:10.137 ************************************ 00:08:10.137 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:10.138 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:10.138 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:10.138 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:10.138 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.138 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:10.138 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.138 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:10.138 18:00:02 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:10.138 18:00:02 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:10.138 18:00:02 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.138 18:00:02 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.138 18:00:02 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.138 18:00:02 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.138 18:00:02 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.138 18:00:02 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:10.138 18:00:02 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:10.396 -x option must be non-negative. 
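accel_negative_buffers exercises the other argument-parsing guard: -x sets the number of xor source buffers, and -1 is rejected with "-x option must be non-negative." before the app starts, as the usage dump repeated below confirms. Per that help text the minimum for xor is 2, so a valid variant of the same command (the buffer count here is an arbitrary legal choice) would be:

    # xor across three source buffers instead of the rejected -x -1;
    # -y verifies each xor result after the operation completes.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3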
00:08:10.396 [2024-05-15 18:00:02.665983] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:10.396 accel_perf options: 00:08:10.396 [-h help message] 00:08:10.396 [-q queue depth per core] 00:08:10.396 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:10.396 [-T number of threads per core 00:08:10.396 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:10.396 [-t time in seconds] 00:08:10.396 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:10.396 [ dif_verify, , dif_generate, dif_generate_copy 00:08:10.396 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:10.396 [-l for compress/decompress workloads, name of uncompressed input file 00:08:10.396 [-S for crc32c workload, use this seed value (default 0) 00:08:10.396 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:10.396 [-f for fill workload, use this BYTE value (default 255) 00:08:10.396 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:10.396 [-y verify result if this switch is on] 00:08:10.396 [-a tasks to allocate per core (default: same value as -q)] 00:08:10.396 Can be used to spread operations across a wider range of memory. 00:08:10.396 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:10.396 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:10.396 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:10.396 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:10.396 00:08:10.396 real 0m0.075s 00:08:10.396 user 0m0.089s 00:08:10.396 sys 0m0.035s 00:08:10.396 ************************************ 00:08:10.396 END TEST accel_negative_buffers 00:08:10.396 ************************************ 00:08:10.396 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.396 18:00:02 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:10.396 18:00:02 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:10.396 18:00:02 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:10.396 18:00:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.396 18:00:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.396 ************************************ 00:08:10.396 START TEST accel_crc32c 00:08:10.396 ************************************ 00:08:10.396 18:00:02 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@12 -- # 
build_accel_config 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:10.396 18:00:02 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:10.396 [2024-05-15 18:00:02.789306] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:10.396 [2024-05-15 18:00:02.789466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64352 ] 00:08:10.653 [2024-05-15 18:00:02.965397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.911 [2024-05-15 18:00:03.199477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.911 18:00:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.170 18:00:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.170 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.170 18:00:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.122 18:00:05 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:13.122 18:00:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.122 00:08:13.122 real 0m2.523s 00:08:13.122 user 0m2.227s 00:08:13.122 sys 0m0.192s 00:08:13.122 ************************************ 00:08:13.122 END TEST accel_crc32c 00:08:13.122 ************************************ 00:08:13.122 18:00:05 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.122 18:00:05 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:13.122 18:00:05 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:13.122 18:00:05 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:13.122 18:00:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.122 18:00:05 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.122 ************************************ 00:08:13.122 START TEST accel_crc32c_C2 00:08:13.122 ************************************ 00:08:13.122 18:00:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:13.122 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.122 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:13.122 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.122 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.122 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:13.122 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:13.122 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.122 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.122 18:00:05 
accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.122 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.122 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.122 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.122 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:13.123 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:13.123 [2024-05-15 18:00:05.350627] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:13.123 [2024-05-15 18:00:05.350787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64398 ] 00:08:13.123 [2024-05-15 18:00:05.522235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.380 [2024-05-15 18:00:05.757548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.638 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.638 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.638 18:00:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.638 18:00:06 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.638 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.639 18:00:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.537 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.538 00:08:15.538 real 0m2.562s 00:08:15.538 user 0m2.264s 00:08:15.538 sys 0m0.199s 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.538 18:00:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:15.538 ************************************ 00:08:15.538 END TEST accel_crc32c_C2 00:08:15.538 ************************************ 00:08:15.538 18:00:07 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:15.538 18:00:07 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:15.538 18:00:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.538 18:00:07 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.538 ************************************ 00:08:15.538 START TEST accel_copy 00:08:15.538 ************************************ 00:08:15.538 18:00:07 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:08:15.538 18:00:07 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:15.538 18:00:07 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:15.538 18:00:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.538 18:00:07 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:15.538 18:00:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.538 18:00:07 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:15.538 18:00:07 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:15.538 
18:00:07 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.538 18:00:07 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.538 18:00:07 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.538 18:00:07 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.538 18:00:07 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.538 18:00:07 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:15.538 18:00:07 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:15.538 [2024-05-15 18:00:07.953339] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:15.538 [2024-05-15 18:00:07.953473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64445 ] 00:08:15.797 [2024-05-15 18:00:08.116425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.055 [2024-05-15 18:00:08.356758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy 
-- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.313 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.314 18:00:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.314 18:00:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.314 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.314 18:00:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.265 18:00:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.265 18:00:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.265 18:00:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.265 18:00:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.265 18:00:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.265 18:00:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.265 18:00:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
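The long runs of val= / case "$var" / IFS=: / read -r var val above are accel_test feeding accel.sh's settings parser: each expected key: value line is split on the colon at accel.sh@19, the value is re-trimmed (the bare val= assignments at @20), and a case on $var records what the run should report, which is how checks like [[ -n software ]] and [[ -n copy ]] after each run know what to compare. A rough shape of that loop, with the trimming rule and the input keys assumed (only the loop/IFS/case structure and the two branch names come from the trace):

    # Rough shape of the parser traced at accel/accel.sh@19-23.
    while IFS=: read -r var val; do
        val=${val# }                  # drop the space after the colon (assumed rule)
        case "$var" in
        opc)    accel_opc=$val ;;     # assignment traced as accel.sh@23
        module) accel_module=$val ;;  # assignment traced as accel.sh@22
        esac
    done <<< $'opc: copy\nmodule: software'   # stand-in input, not the real feed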
00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:18.266 18:00:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.266 00:08:18.266 real 0m2.536s 00:08:18.266 user 0m0.015s 00:08:18.266 sys 0m0.003s 00:08:18.266 18:00:10 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:18.266 ************************************ 00:08:18.266 END TEST accel_copy 00:08:18.266 ************************************ 00:08:18.266 18:00:10 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:18.266 18:00:10 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:18.266 18:00:10 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:18.266 18:00:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.266 18:00:10 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.266 ************************************ 00:08:18.266 START TEST accel_fill 00:08:18.266 ************************************ 00:08:18.266 18:00:10 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:18.266 18:00:10 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:18.266 [2024-05-15 18:00:10.551047] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:08:18.266 [2024-05-15 18:00:10.551206] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64491 ] 00:08:18.266 [2024-05-15 18:00:10.720887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.525 [2024-05-15 18:00:10.966617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@22 -- # 
accel_module=software 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.785 18:00:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.687 18:00:13 accel.accel_fill 
-- accel/accel.sh@20 -- # val= 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:20.687 ************************************ 00:08:20.687 END TEST accel_fill 00:08:20.687 ************************************ 00:08:20.687 18:00:13 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.687 00:08:20.687 real 0m2.564s 00:08:20.687 user 0m2.273s 00:08:20.687 sys 0m0.194s 00:08:20.687 18:00:13 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:20.687 18:00:13 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:20.687 18:00:13 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:20.687 18:00:13 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:20.687 18:00:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:20.687 18:00:13 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.687 ************************************ 00:08:20.687 START TEST accel_copy_crc32c 00:08:20.687 ************************************ 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:20.687 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:20.687 [2024-05-15 18:00:13.167650] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:08:20.687 [2024-05-15 18:00:13.167829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64538 ] 00:08:20.959 [2024-05-15 18:00:13.346705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.219 [2024-05-15 18:00:13.644777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.478 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.479 18:00:13 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.479 18:00:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.382 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:23.382 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.382 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.382 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.382 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:08:23.382 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.382 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.382 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.382 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:23.382 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.382 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.382 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.382 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.383 00:08:23.383 real 0m2.630s 00:08:23.383 user 0m2.317s 00:08:23.383 sys 0m0.217s 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:23.383 18:00:15 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:23.383 ************************************ 00:08:23.383 END TEST accel_copy_crc32c 00:08:23.383 ************************************ 00:08:23.383 18:00:15 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:23.383 18:00:15 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:23.383 18:00:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:23.383 18:00:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.383 ************************************ 00:08:23.383 START TEST accel_copy_crc32c_C2 00:08:23.383 ************************************ 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:23.383 18:00:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:23.383 [2024-05-15 18:00:15.845223] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:23.383 [2024-05-15 18:00:15.845412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64590 ] 00:08:23.644 [2024-05-15 18:00:16.020653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.906 [2024-05-15 18:00:16.261920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.165 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.166 18:00:16 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:24.166 18:00:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.066 00:08:26.066 real 0m2.554s 00:08:26.066 user 0m2.247s 00:08:26.066 sys 0m0.210s 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.066 ************************************ 00:08:26.066 END TEST accel_copy_crc32c_C2 00:08:26.066 ************************************ 00:08:26.066 18:00:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:26.066 18:00:18 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:08:26.066 18:00:18 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:26.066 18:00:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.066 18:00:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.066 ************************************ 00:08:26.066 START TEST accel_dualcast 00:08:26.066 ************************************ 00:08:26.066 18:00:18 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:26.066 18:00:18 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:26.066 [2024-05-15 18:00:18.449465] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
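Both copy_crc32c variants above complete against the software module (real 0m2.630s and 0m2.554s). The only difference between them is the trailing -C 2, and the option dump shifts accordingly: the plain run echoes '4096 bytes' twice, while the -C 2 run echoes '4096 bytes' and '8192 bytes'. Reading -C as the number of source chunks fed to the copy-plus-crc32c operation is an inference from that dump, not a documented mapping:

  $ ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
  $ ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
  # with -C 2 the destination grows to 8192 bytes while each source
  # buffer stays at 4096 bytes, per the echoed values in the trace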
00:08:26.066 [2024-05-15 18:00:18.449645] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64631 ] 00:08:26.324 [2024-05-15 18:00:18.625496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.583 [2024-05-15 18:00:18.869848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:26.583 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:26.584 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.854 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:26.855 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.855 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.855 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.855 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:26.855 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.855 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.855 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:26.855 18:00:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:26.855 18:00:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:26.855 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:26.855 18:00:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:28.806 
18:00:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:28.806 18:00:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.806 00:08:28.806 real 0m2.573s 00:08:28.806 user 0m0.014s 00:08:28.806 sys 0m0.005s 00:08:28.806 18:00:20 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.806 18:00:20 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:28.806 ************************************ 00:08:28.806 END TEST accel_dualcast 00:08:28.806 ************************************ 00:08:28.806 18:00:21 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:28.806 18:00:21 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:28.806 18:00:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.806 18:00:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.806 ************************************ 00:08:28.806 START TEST accel_compare 00:08:28.806 ************************************ 00:08:28.806 18:00:21 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:28.806 18:00:21 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:28.806 [2024-05-15 18:00:21.061507] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
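The dualcast opcode copies one source buffer to two destinations in a single operation, which is why its option dump above carries a single '4096 bytes' entry. Every test in this stretch also ends with the same three assertions from accel.sh, visible as the [[ ]] checks around each END TEST banner; spelled out with the variables the trace itself sets (accel_module=software, accel_opc=dualcast), a reconstruction rather than a quote of the script:

  [[ -n "$accel_module" ]]            # some module handled the op
  [[ -n "$accel_opc" ]]               # the opcode name was parsed back
  [[ "$accel_module" == software ]]   # and it was the expected module

One oddity in the data: the dualcast summary reports user 0m0.014s and sys 0m0.005s against real 0m2.573s, far below the roughly two CPU-seconds its neighbors burn; nothing in the log explains the gap.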
00:08:28.806 [2024-05-15 18:00:21.061650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64683 ] 00:08:28.806 [2024-05-15 18:00:21.226677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.064 [2024-05-15 18:00:21.509105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.334 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.335 18:00:21 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.335 18:00:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:31.247 18:00:23 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:31.247 18:00:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.247 00:08:31.247 real 0m2.551s 00:08:31.247 user 0m2.276s 00:08:31.247 sys 0m0.179s 00:08:31.247 18:00:23 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:31.247 ************************************ 00:08:31.247 END TEST accel_compare 00:08:31.247 ************************************ 00:08:31.247 18:00:23 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:31.247 18:00:23 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:31.247 18:00:23 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:31.247 18:00:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:31.247 18:00:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.247 ************************************ 00:08:31.247 START TEST accel_xor 00:08:31.247 ************************************ 00:08:31.247 18:00:23 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:31.247 18:00:23 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:31.247 [2024-05-15 18:00:23.658920] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
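accel_compare pairs -w compare with -y, so accel_perf both runs the comparison opcode and verifies the outcome (real 0m2.551s, user 0m2.276s, sys 0m0.179s above). The banners and timing block around every test appear to come from the run_test wrapper rather than from accel_perf itself; its call shape, exactly as the trace records it:

  run_test accel_compare accel_test -t 1 -w compare -y
  # run_test prints the START/END banners and the real/user/sys block,
  # then accel_test forwards its arguments on to build/examples/accel_perf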
00:08:31.247 [2024-05-15 18:00:23.659056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64724 ] 00:08:31.506 [2024-05-15 18:00:23.820505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.763 [2024-05-15 18:00:24.052107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.763 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:31.763 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:31.763 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.763 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.763 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:31.763 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:31.763 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.763 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.763 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:31.763 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:31.763 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.763 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:31.764 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:32.021 18:00:24 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.021 18:00:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.921 00:08:33.921 real 0m2.520s 00:08:33.921 user 0m2.229s 00:08:33.921 sys 0m0.195s 00:08:33.921 ************************************ 00:08:33.921 END TEST accel_xor 00:08:33.921 ************************************ 00:08:33.921 18:00:26 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.921 18:00:26 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:33.921 18:00:26 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:33.921 18:00:26 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:33.921 18:00:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:33.921 18:00:26 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.921 ************************************ 00:08:33.921 START TEST accel_xor 00:08:33.921 ************************************ 00:08:33.921 18:00:26 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:33.921 18:00:26 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:33.921 [2024-05-15 18:00:26.235829] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:08:33.921 [2024-05-15 18:00:26.236037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64775 ] 00:08:34.179 [2024-05-15 18:00:26.436783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.179 [2024-05-15 18:00:26.654072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:34.438 18:00:26 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.438 18:00:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:36.356 ************************************ 00:08:36.356 END TEST accel_xor 00:08:36.356 ************************************ 00:08:36.356 18:00:28 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:36.356 00:08:36.356 real 0m2.532s 00:08:36.356 user 0m2.217s 00:08:36.356 sys 0m0.217s 00:08:36.356 18:00:28 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:36.356 18:00:28 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:36.356 18:00:28 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:36.356 18:00:28 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:36.356 18:00:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:36.357 18:00:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.357 ************************************ 00:08:36.357 START TEST accel_dif_verify 00:08:36.357 ************************************ 00:08:36.357 18:00:28 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:36.357 18:00:28 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:36.357 [2024-05-15 18:00:28.813980] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
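The dif_verify case starting here exercises DIF (Data Integrity Field) checking; the values traced just below, two 4096-byte buffers plus what are presumably the 512-byte block size and 8-byte DIF tag, are the parameters the wrapper hands to accel_perf. A hedged equivalent under the same assumptions as the xor sketch:

  # verify DIF-protected buffers for 1 second on the software path
  ./build/examples/accel_perf -t 1 -w dif_verify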
00:08:36.357 [2024-05-15 18:00:28.814172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64817 ] 00:08:36.620 [2024-05-15 18:00:28.991472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.881 [2024-05-15 18:00:29.225286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:37.139 18:00:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:39.037 18:00:31 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.037 00:08:39.037 real 0m2.494s 00:08:39.037 user 0m2.180s 00:08:39.037 sys 0m0.219s 00:08:39.037 18:00:31 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:39.037 ************************************ 00:08:39.037 END TEST accel_dif_verify 00:08:39.037 ************************************ 00:08:39.037 18:00:31 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:39.037 18:00:31 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:39.037 18:00:31 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:39.037 18:00:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:39.037 18:00:31 accel -- common/autotest_common.sh@10 -- # set +x 00:08:39.037 ************************************ 00:08:39.037 START TEST accel_dif_generate 00:08:39.037 ************************************ 00:08:39.037 18:00:31 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:08:39.037 18:00:31 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:39.037 18:00:31 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:39.037 18:00:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.037 18:00:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.037 18:00:31 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:08:39.037 18:00:31 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:39.037 18:00:31 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:39.037 18:00:31 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:39.037 18:00:31 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:39.037 18:00:31 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.037 18:00:31 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.037 18:00:31 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:39.037 18:00:31 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:39.038 18:00:31 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:39.038 [2024-05-15 18:00:31.370566] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:39.038 [2024-05-15 18:00:31.370753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64864 ] 00:08:39.295 [2024-05-15 18:00:31.545065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.553 [2024-05-15 18:00:31.806161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 
18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.553 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.554 18:00:32 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:39.554 18:00:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:41.449 18:00:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:41.449 00:08:41.449 real 0m2.595s 00:08:41.449 user 0m0.019s 00:08:41.449 sys 0m0.003s 00:08:41.449 18:00:33 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:41.449 
************************************ 00:08:41.449 END TEST accel_dif_generate 00:08:41.449 ************************************ 00:08:41.449 18:00:33 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:41.707 18:00:33 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:41.707 18:00:33 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:41.707 18:00:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:41.707 18:00:33 accel -- common/autotest_common.sh@10 -- # set +x 00:08:41.707 ************************************ 00:08:41.707 START TEST accel_dif_generate_copy 00:08:41.707 ************************************ 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:41.707 18:00:33 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:41.707 [2024-05-15 18:00:34.015919] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
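dif_generate_copy differs from the dif_generate case that just ended in that the generated DIF tags and data are written to a separate destination buffer, which is consistent with the two 4096-byte buffers traced below. A sketch under the same assumptions:

  # generate DIF tags while copying into a destination buffer
  ./build/examples/accel_perf -t 1 -w dif_generate_copy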
00:08:41.707 [2024-05-15 18:00:34.016104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64910 ] 00:08:41.707 [2024-05-15 18:00:34.192507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.965 [2024-05-15 18:00:34.444680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:42.223 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.224 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.224 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.224 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.224 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.224 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.224 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:42.224 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:42.224 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:42.224 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:42.224 18:00:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:44.120 00:08:44.120 real 0m2.498s 00:08:44.120 user 0m2.189s 00:08:44.120 sys 0m0.213s 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:44.120 ************************************ 00:08:44.120 END TEST accel_dif_generate_copy 00:08:44.120 ************************************ 00:08:44.120 18:00:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:44.120 18:00:36 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:44.120 18:00:36 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:44.120 18:00:36 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:44.120 18:00:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:44.120 18:00:36 accel -- common/autotest_common.sh@10 -- # set +x 00:08:44.120 ************************************ 00:08:44.120 START TEST accel_comp 00:08:44.120 ************************************ 00:08:44.120 18:00:36 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
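The compress case is the first in this series to read an input corpus: -l /home/vagrant/spdk_repo/spdk/test/accel/bib points accel_perf at a sample file to compress repeatedly for the one-second run. A minimal sketch, assuming the same repository layout:

  # compress the bundled test/accel/bib corpus for 1 second
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib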
00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:44.120 18:00:36 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:44.120 [2024-05-15 18:00:36.556644] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:44.120 [2024-05-15 18:00:36.556776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64957 ] 00:08:44.378 [2024-05-15 18:00:36.713467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.635 [2024-05-15 18:00:36.975939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.892 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.892 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.892 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:44.893 18:00:37 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:44.893 18:00:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:46.851 ************************************ 00:08:46.851 END TEST accel_comp 00:08:46.851 ************************************ 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:46.851 18:00:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:46.851 00:08:46.851 real 0m2.513s 00:08:46.851 user 0m2.215s 00:08:46.851 sys 0m0.202s 00:08:46.851 18:00:39 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:46.851 18:00:39 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:46.851 18:00:39 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:46.851 18:00:39 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:46.851 18:00:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:46.851 18:00:39 accel -- common/autotest_common.sh@10 -- # set +x 00:08:46.851 ************************************ 00:08:46.851 START TEST accel_decomp 00:08:46.851 ************************************ 00:08:46.851 18:00:39 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:46.851 18:00:39 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:46.851 
18:00:39 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:46.851 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:46.851 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:46.851 18:00:39 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:46.851 18:00:39 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:46.851 18:00:39 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:46.851 18:00:39 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:46.851 18:00:39 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:46.851 18:00:39 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:46.851 18:00:39 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:46.851 18:00:39 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:46.851 18:00:39 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:46.851 18:00:39 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:46.851 [2024-05-15 18:00:39.122567] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:46.851 [2024-05-15 18:00:39.122723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65003 ] 00:08:46.851 [2024-05-15 18:00:39.295633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.109 [2024-05-15 18:00:39.544826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
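The decompress run configured above feeds the same bib corpus through the inverse operation, with -y again requesting verification of the output. Sketch under the same assumptions as the compress example:

  # decompress the corpus with verification enabled
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y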
00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:47.368 18:00:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:49.265 18:00:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:49.265 00:08:49.265 real 0m2.531s 00:08:49.265 user 0m2.241s 00:08:49.265 sys 0m0.194s 00:08:49.265 18:00:41 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:49.265 18:00:41 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:49.265 ************************************ 00:08:49.265 END TEST accel_decomp 00:08:49.265 ************************************ 00:08:49.265 18:00:41 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:49.265 18:00:41 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:49.265 18:00:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:49.265 18:00:41 accel -- common/autotest_common.sh@10 -- # set +x 
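The real/user/sys triple and the START/END banners around each test come from the run_test wrapper that accel.sh calls into (the trace jumps to common/autotest_common.sh at that point). A minimal sketch of the wrapper's observable behavior, reconstructed from the banners and the bash 'time' output rather than copied from the actual source:

    # Sketch only: name and banner width are inferred from the log above; the
    # real run_test in test/common/autotest_common.sh also juggles xtrace
    # state (the xtrace_disable / 'set +x' lines at each test boundary).
    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }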
00:08:49.265 ************************************ 00:08:49.265 START TEST accel_decmop_full 00:08:49.265 ************************************ 00:08:49.265 18:00:41 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:08:49.265 18:00:41 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:08:49.266 [2024-05-15 18:00:41.699837] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:08:49.266 [2024-05-15 18:00:41.699985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65050 ] 00:08:49.524 [2024-05-15 18:00:41.861138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.782 [2024-05-15 18:00:42.082239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:49.782 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:50.041 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.041 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.041 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.041 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:08:50.041 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.041 18:00:42 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.042 18:00:42 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:50.042 18:00:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.961 18:00:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:51.961 18:00:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:51.961 18:00:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:51.961 18:00:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.961 18:00:44 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:51.962 18:00:44 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:51.962 00:08:51.962 real 0m2.539s 00:08:51.962 user 0m2.244s 00:08:51.962 sys 0m0.198s 00:08:51.962 ************************************ 00:08:51.962 END TEST accel_decmop_full 00:08:51.962 ************************************ 00:08:51.962 18:00:44 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:51.962 18:00:44 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:08:51.962 18:00:44 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:51.962 18:00:44 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:51.962 18:00:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:51.962 18:00:44 accel -- common/autotest_common.sh@10 -- # set +x 00:08:51.962 ************************************ 00:08:51.962 START TEST accel_decomp_mcore 00:08:51.962 ************************************ 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
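The only delta between accel_decomp and accel_decmop_full is the trailing '-o 0': the traced buffer size switches from val='4096 bytes' to val='111250 bytes', so -o 0 evidently makes accel_perf use the whole input file per operation instead of the default 4096-byte transfer. The equivalent standalone command, exactly as echoed at accel.sh@12:

    # accel_perf invocation for the decmop_full case; /dev/fd/62 carries the
    # (empty) accel JSON config that the harness pipes in.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0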
00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:51.962 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:51.962 [2024-05-15 18:00:44.294713] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:51.962 [2024-05-15 18:00:44.295039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65097 ] 00:08:52.220 [2024-05-15 18:00:44.471407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.480 [2024-05-15 18:00:44.728542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.480 [2024-05-15 18:00:44.728602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.480 [2024-05-15 18:00:44.728708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.480 [2024-05-15 18:00:44.728722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:52.480 18:00:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:54.381 ************************************ 00:08:54.381 END TEST accel_decomp_mcore 00:08:54.381 ************************************ 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:54.381 00:08:54.381 real 0m2.564s 00:08:54.381 user 0m7.358s 00:08:54.381 sys 0m0.239s 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:54.381 18:00:46 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:54.381 18:00:46 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.381 18:00:46 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:54.381 18:00:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:54.381 18:00:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:54.381 ************************************ 00:08:54.381 START TEST accel_decomp_full_mcore 00:08:54.381 ************************************ 00:08:54.381 18:00:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.381 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:54.382 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:54.382 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:54.382 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:54.382 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.382 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.382 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:54.382 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:54.382 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:54.382 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:54.382 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:54.382 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:54.382 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:54.382 18:00:46 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
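accel_decomp_mcore passes -m 0xf, and the EAL banner duly reports 'Total cores available: 4' with reactors on cores 0 through 3; user time (7.358s) exceeding real time (2.564s) is expected with four cores polling in parallel. The mask is a plain bitmap of core IDs, decodable with shell arithmetic:

    # Decode an SPDK/DPDK core mask into the cores it selects (0xf -> 0 1 2 3).
    mask=0xf
    printf 'mask %s selects cores:' "$mask"
    for i in $(seq 0 31); do
        (( (mask >> i) & 1 )) && printf ' %d' "$i"
    done
    echo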
00:08:54.640 [2024-05-15 18:00:46.898587] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:54.640 [2024-05-15 18:00:46.898732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65146 ] 00:08:54.640 [2024-05-15 18:00:47.058556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.899 [2024-05-15 18:00:47.294845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.899 [2024-05-15 18:00:47.295022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.899 [2024-05-15 18:00:47.295138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.899 [2024-05-15 18:00:47.295321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:55.157 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.158 18:00:47 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.158 18:00:47 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.158 18:00:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.061 18:00:49 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:57.061 00:08:57.061 real 0m2.540s 00:08:57.061 user 0m0.010s 00:08:57.061 sys 0m0.005s 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:57.061 18:00:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:57.061 ************************************ 00:08:57.061 END TEST accel_decomp_full_mcore 00:08:57.061 ************************************ 00:08:57.061 18:00:49 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:57.061 18:00:49 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:57.061 18:00:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:57.061 18:00:49 accel -- common/autotest_common.sh@10 -- # set +x 00:08:57.061 ************************************ 00:08:57.061 START TEST accel_decomp_mthread 00:08:57.061 ************************************ 00:08:57.061 18:00:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:57.062 18:00:49 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:57.062 [2024-05-15 18:00:49.490847] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
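Most of the bulk above is accel.sh's config loop: the same 'IFS=:', 'read -r var val', and 'case "$var" in' trace lines repeat once per token as the test's option stream is classified into accel_opc, accel_module, buffer size, and so on. A hypothetical reconstruction of the loop's shape, consistent with the trace but not copied from the actual accel.sh source:

    # Assumed shape only: tokens, variable names, and cases are inferred from
    # the xtrace output above, not from accel.sh itself.
    opts=$'decompress\nsoftware'        # stand-in for the real token stream
    while IFS=: read -r var val; do
        case "$var" in
            decompress) accel_opc=decompress ;;
            software)   accel_module=software ;;
        esac
    done <<< "$opts"
    echo "opc=$accel_opc module=$accel_module"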
00:08:57.062 [2024-05-15 18:00:49.490981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65196 ] 00:08:57.322 [2024-05-15 18:00:49.654663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.580 [2024-05-15 18:00:49.889278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:57.838 18:00:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.754 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:59.755 ************************************ 00:08:59.755 END TEST accel_decomp_mthread 00:08:59.755 ************************************ 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:59.755 00:08:59.755 real 0m2.497s 00:08:59.755 user 0m2.206s 00:08:59.755 sys 0m0.197s 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:59.755 18:00:51 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:59.755 18:00:51 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:59.755 18:00:51 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:59.755 18:00:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:59.755 18:00:51 accel -- common/autotest_common.sh@10 -- # set +x 00:08:59.755 ************************************ 00:08:59.755 START TEST accel_decomp_full_mthread 00:08:59.755 ************************************ 00:08:59.755 18:00:51 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:59.755 18:00:51 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:59.755 18:00:51 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:59.755 18:00:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:59.755 18:00:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:59.755 18:00:51 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:59.755 18:00:51 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:59.755 18:00:51 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:59.755 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:59.755 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:59.755 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:59.755 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:59.755 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:59.755 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:59.755 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:59.755 [2024-05-15 18:00:52.053711] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:08:59.755 [2024-05-15 18:00:52.053874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65242 ] 00:08:59.755 [2024-05-15 18:00:52.227854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.018 [2024-05-15 18:00:52.468754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.282 18:00:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.219 ************************************ 00:09:02.219 END TEST accel_decomp_full_mthread 00:09:02.219 ************************************ 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:02.219 00:09:02.219 real 0m2.559s 00:09:02.219 user 0m2.262s 00:09:02.219 sys 0m0.202s 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:02.219 18:00:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:02.219 18:00:54 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:09:02.219 18:00:54 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:02.219 18:00:54 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:02.219 18:00:54 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:02.219 18:00:54 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:02.219 18:00:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:02.219 18:00:54 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:02.219 18:00:54 accel -- common/autotest_common.sh@10 -- # set +x 00:09:02.219 18:00:54 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:02.219 18:00:54 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.219 18:00:54 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:02.219 18:00:54 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:02.219 18:00:54 accel -- accel/accel.sh@41 -- # jq -r . 00:09:02.219 ************************************ 00:09:02.219 START TEST accel_dif_functional_tests 00:09:02.219 ************************************ 00:09:02.219 18:00:54 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:02.219 [2024-05-15 18:00:54.719366] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:09:02.478 [2024-05-15 18:00:54.719805] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65290 ] 00:09:02.478 [2024-05-15 18:00:54.895514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:02.735 [2024-05-15 18:00:55.129476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.735 [2024-05-15 18:00:55.129589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.735 [2024-05-15 18:00:55.129604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.994 00:09:02.994 00:09:02.994 CUnit - A unit testing framework for C - Version 2.1-3 00:09:02.994 http://cunit.sourceforge.net/ 00:09:02.994 00:09:02.994 00:09:02.994 Suite: accel_dif 00:09:02.994 Test: verify: DIF generated, GUARD check ...passed 00:09:02.994 Test: verify: DIF generated, APPTAG check ...passed 00:09:02.994 Test: verify: DIF generated, REFTAG check ...passed 00:09:02.994 Test: verify: DIF not generated, GUARD check ...passed 00:09:02.994 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 18:00:55.438912] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:02.994 [2024-05-15 18:00:55.439002] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:02.994 [2024-05-15 18:00:55.439082] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:02.994 [2024-05-15 18:00:55.439202] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:02.994 passed 00:09:02.994 Test: verify: DIF not generated, REFTAG check ...passed 00:09:02.994 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:02.995 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 18:00:55.439287] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:02.995 [2024-05-15 18:00:55.439530] dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:02.995 [2024-05-15 18:00:55.439640] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:02.995 passed 00:09:02.995 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:02.995 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:02.995 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:02.995 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:09:02.995 Test: generate copy: DIF generated, GUARD check ...[2024-05-15 18:00:55.440041] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:02.995 passed 00:09:02.995 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:02.995 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:02.995 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:02.995 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:02.995 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:02.995 Test: generate copy: iovecs-len validate ...passed 00:09:02.995 Test: generate copy: buffer alignment validate ...[2024-05-15 18:00:55.440737] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:09:02.995 passed 00:09:02.995 00:09:02.995 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.995 suites 1 1 n/a 0 0 00:09:02.995 tests 20 20 20 0 0 00:09:02.995 asserts 204 204 204 0 n/a 00:09:02.995 00:09:02.995 Elapsed time = 0.005 seconds 00:09:04.374 ************************************ 00:09:04.374 END TEST accel_dif_functional_tests 00:09:04.374 ************************************ 00:09:04.374 00:09:04.374 real 0m1.941s 00:09:04.374 user 0m3.708s 00:09:04.374 sys 0m0.257s 00:09:04.374 18:00:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:04.374 18:00:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:04.374 00:09:04.374 real 1m1.122s 00:09:04.374 user 1m5.791s 00:09:04.374 sys 0m6.152s 00:09:04.374 18:00:56 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:04.374 18:00:56 accel -- common/autotest_common.sh@10 -- # set +x 00:09:04.374 ************************************ 00:09:04.374 END TEST accel 00:09:04.374 ************************************ 00:09:04.374 18:00:56 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:04.374 18:00:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:04.374 18:00:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:04.374 18:00:56 -- common/autotest_common.sh@10 -- # set +x 00:09:04.374 ************************************ 00:09:04.374 START TEST accel_rpc 00:09:04.374 ************************************ 00:09:04.374 18:00:56 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:04.374 * Looking for test storage... 00:09:04.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:04.374 18:00:56 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:04.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
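The *ERROR* notices in the DIF suite above are expected output, not failures: the negative-path tests deliberately feed mismatched Guard, App Tag, and Ref Tag values into _dif_verify and _dif_reftag_check and assert that the miscompare is reported, which is why each of those tests still ends in "passed". A minimal sketch of rerunning just that binary outside run_test, assuming the tree layout from this log and that an empty JSON config (no accel modules overridden) is accepted the same way as the fd that build_accel_config normally supplies:

    # Sketch only: path taken from this log; '-c <(...)' mirrors the
    # '-c /dev/fd/62' process-substitution trick visible in the trace above.
    dif_bin=/home/vagrant/spdk_repo/spdk/test/accel/dif/dif
    "$dif_bin" -c <(echo '{}')   # '{}' is an assumption for "no overrides"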
00:09:04.374 18:00:56 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=65372 00:09:04.374 18:00:56 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 65372 00:09:04.374 18:00:56 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:04.374 18:00:56 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 65372 ']' 00:09:04.374 18:00:56 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.374 18:00:56 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:04.374 18:00:56 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.374 18:00:56 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:04.374 18:00:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.374 [2024-05-15 18:00:56.850475] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:09:04.375 [2024-05-15 18:00:56.850661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65372 ] 00:09:04.644 [2024-05-15 18:00:57.022785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.908 [2024-05-15 18:00:57.248461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.475 18:00:57 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:05.475 18:00:57 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:05.475 18:00:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:05.475 18:00:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:05.475 18:00:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:05.475 18:00:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:05.475 18:00:57 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:05.475 18:00:57 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:05.475 18:00:57 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:05.475 18:00:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.475 ************************************ 00:09:05.475 START TEST accel_assign_opcode 00:09:05.475 ************************************ 00:09:05.475 18:00:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:09:05.475 18:00:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:05.475 18:00:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.475 18:00:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:05.475 [2024-05-15 18:00:57.737450] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:05.475 18:00:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.475 18:00:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:05.475 18:00:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.475 18:00:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:05.475 [2024-05-15 
18:00:57.745415] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:05.475 18:00:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.475 18:00:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:05.475 18:00:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.475 18:00:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:06.041 18:00:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.041 18:00:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:06.041 18:00:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:06.041 18:00:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:06.041 18:00:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.041 18:00:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:06.041 18:00:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.298 software 00:09:06.298 00:09:06.298 real 0m0.812s 00:09:06.298 user 0m0.054s 00:09:06.298 sys 0m0.010s 00:09:06.298 18:00:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:06.298 18:00:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:06.298 ************************************ 00:09:06.298 END TEST accel_assign_opcode 00:09:06.298 ************************************ 00:09:06.298 18:00:58 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 65372 00:09:06.298 18:00:58 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 65372 ']' 00:09:06.298 18:00:58 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 65372 00:09:06.298 18:00:58 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:09:06.298 18:00:58 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:06.298 18:00:58 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65372 00:09:06.298 killing process with pid 65372 00:09:06.298 18:00:58 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:06.298 18:00:58 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:06.298 18:00:58 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65372' 00:09:06.298 18:00:58 accel_rpc -- common/autotest_common.sh@965 -- # kill 65372 00:09:06.298 18:00:58 accel_rpc -- common/autotest_common.sh@970 -- # wait 65372 00:09:08.827 ************************************ 00:09:08.827 END TEST accel_rpc 00:09:08.827 ************************************ 00:09:08.827 00:09:08.827 real 0m4.130s 00:09:08.827 user 0m4.020s 00:09:08.827 sys 0m0.591s 00:09:08.827 18:01:00 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:08.827 18:01:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.827 18:01:00 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:08.827 18:01:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:08.827 18:01:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:08.827 18:01:00 -- common/autotest_common.sh@10 -- # set +x 00:09:08.827 ************************************ 00:09:08.827 START TEST app_cmdline 00:09:08.827 
************************************ 00:09:08.827 18:01:00 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:08.827 * Looking for test storage... 00:09:08.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:08.827 18:01:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:08.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.827 18:01:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=65489 00:09:08.827 18:01:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 65489 00:09:08.827 18:01:00 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:08.827 18:01:00 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 65489 ']' 00:09:08.828 18:01:00 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.828 18:01:00 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:08.828 18:01:00 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.828 18:01:00 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:08.828 18:01:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:08.828 [2024-05-15 18:01:01.044829] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:09:08.828 [2024-05-15 18:01:01.044998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65489 ] 00:09:08.828 [2024-05-15 18:01:01.221790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.085 [2024-05-15 18:01:01.476750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.018 18:01:02 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:10.019 18:01:02 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:09:10.019 18:01:02 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:10.276 { 00:09:10.276 "version": "SPDK v24.05-pre git sha1 40b11d962", 00:09:10.276 "fields": { 00:09:10.276 "major": 24, 00:09:10.276 "minor": 5, 00:09:10.276 "patch": 0, 00:09:10.276 "suffix": "-pre", 00:09:10.276 "commit": "40b11d962" 00:09:10.276 } 00:09:10.276 } 00:09:10.276 18:01:02 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:10.276 18:01:02 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:10.276 18:01:02 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:10.276 18:01:02 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:10.276 18:01:02 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:10.276 18:01:02 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.276 18:01:02 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.276 18:01:02 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:10.276 18:01:02 
app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:10.276 18:01:02 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:10.276 18:01:02 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:10.541 request: 00:09:10.541 { 00:09:10.541 "method": "env_dpdk_get_mem_stats", 00:09:10.541 "req_id": 1 00:09:10.541 } 00:09:10.541 Got JSON-RPC error response 00:09:10.541 response: 00:09:10.541 { 00:09:10.541 "code": -32601, 00:09:10.541 "message": "Method not found" 00:09:10.541 } 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.541 18:01:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 65489 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 65489 ']' 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 65489 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65489 00:09:10.541 killing process with pid 65489 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65489' 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@965 -- # kill 65489 00:09:10.541 18:01:02 app_cmdline -- common/autotest_common.sh@970 -- # wait 65489 00:09:13.073 00:09:13.073 real 0m4.422s 00:09:13.073 user 0m4.798s 00:09:13.073 sys 0m0.630s 00:09:13.073 18:01:05 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:13.073 18:01:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:13.073 ************************************ 
00:09:13.073 END TEST app_cmdline 00:09:13.073 ************************************ 00:09:13.073 18:01:05 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:13.073 18:01:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:13.073 18:01:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:13.073 18:01:05 -- common/autotest_common.sh@10 -- # set +x 00:09:13.073 ************************************ 00:09:13.073 START TEST version 00:09:13.073 ************************************ 00:09:13.073 18:01:05 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:13.073 * Looking for test storage... 00:09:13.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:13.073 18:01:05 version -- app/version.sh@17 -- # get_header_version major 00:09:13.073 18:01:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:13.073 18:01:05 version -- app/version.sh@14 -- # cut -f2 00:09:13.073 18:01:05 version -- app/version.sh@14 -- # tr -d '"' 00:09:13.073 18:01:05 version -- app/version.sh@17 -- # major=24 00:09:13.073 18:01:05 version -- app/version.sh@18 -- # get_header_version minor 00:09:13.073 18:01:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:13.073 18:01:05 version -- app/version.sh@14 -- # cut -f2 00:09:13.073 18:01:05 version -- app/version.sh@14 -- # tr -d '"' 00:09:13.073 18:01:05 version -- app/version.sh@18 -- # minor=5 00:09:13.073 18:01:05 version -- app/version.sh@19 -- # get_header_version patch 00:09:13.073 18:01:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:13.073 18:01:05 version -- app/version.sh@14 -- # tr -d '"' 00:09:13.073 18:01:05 version -- app/version.sh@14 -- # cut -f2 00:09:13.073 18:01:05 version -- app/version.sh@19 -- # patch=0 00:09:13.073 18:01:05 version -- app/version.sh@20 -- # get_header_version suffix 00:09:13.073 18:01:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:13.073 18:01:05 version -- app/version.sh@14 -- # cut -f2 00:09:13.073 18:01:05 version -- app/version.sh@14 -- # tr -d '"' 00:09:13.073 18:01:05 version -- app/version.sh@20 -- # suffix=-pre 00:09:13.073 18:01:05 version -- app/version.sh@22 -- # version=24.5 00:09:13.073 18:01:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:13.073 18:01:05 version -- app/version.sh@28 -- # version=24.5rc0 00:09:13.073 18:01:05 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:13.073 18:01:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:13.073 18:01:05 version -- app/version.sh@30 -- # py_version=24.5rc0 00:09:13.073 18:01:05 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:09:13.073 00:09:13.073 real 0m0.150s 00:09:13.073 user 0m0.077s 00:09:13.073 sys 0m0.106s 00:09:13.073 ************************************ 00:09:13.073 END TEST version 00:09:13.073 ************************************ 00:09:13.073 18:01:05 version -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:09:13.073 18:01:05 version -- common/autotest_common.sh@10 -- # set +x 00:09:13.073 18:01:05 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:09:13.073 18:01:05 -- spdk/autotest.sh@194 -- # uname -s 00:09:13.073 18:01:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:13.073 18:01:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:13.073 18:01:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:13.073 18:01:05 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:09:13.073 18:01:05 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:13.073 18:01:05 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:13.073 18:01:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:13.073 18:01:05 -- common/autotest_common.sh@10 -- # set +x 00:09:13.073 ************************************ 00:09:13.073 START TEST blockdev_nvme 00:09:13.073 ************************************ 00:09:13.073 18:01:05 blockdev_nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:13.350 * Looking for test storage... 00:09:13.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:13.350 18:01:05 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=65656 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:13.350 18:01:05 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 65656 00:09:13.350 18:01:05 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 65656 ']' 00:09:13.350 18:01:05 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.350 18:01:05 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:13.350 18:01:05 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.350 18:01:05 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:13.350 18:01:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:13.350 [2024-05-15 18:01:05.712027] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:09:13.350 [2024-05-15 18:01:05.712342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65656 ] 00:09:13.608 [2024-05-15 18:01:05.877549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.866 [2024-05-15 18:01:06.209599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.801 18:01:07 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:14.801 18:01:07 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:09:14.801 18:01:07 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:09:14.801 18:01:07 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:09:14.801 18:01:07 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:09:14.801 18:01:07 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:14.801 18:01:07 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:14.801 18:01:07 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:14.801 18:01:07 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.801 18:01:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.060 18:01:07 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.060 18:01:07 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:09:15.060 18:01:07 blockdev_nvme -- 
bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.060 18:01:07 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.060 18:01:07 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.060 18:01:07 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:15.060 18:01:07 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:15.060 18:01:07 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.060 18:01:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:15.061 18:01:07 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.061 18:01:07 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:15.061 18:01:07 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:15.061 18:01:07 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e73a1581-0f56-4f29-9d31-7b8c5e75a59a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e73a1581-0f56-4f29-9d31-7b8c5e75a59a",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "1317d842-8078-405d-b036-d211cf8264ed"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1317d842-8078-405d-b036-d211cf8264ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "6431af33-8ec7-4b63-baf6-9212d5349e01"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6431af33-8ec7-4b63-baf6-9212d5349e01",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "125da9ab-89c3-4d9f-8d4e-832f77d61a81"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "125da9ab-89c3-4d9f-8d4e-832f77d61a81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' 
' "aliases": [' ' "b190f83c-be2f-47b9-85ae-e03f154cb816"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b190f83c-be2f-47b9-85ae-e03f154cb816",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "80191a3d-2bc0-42a9-b018-ed799a931bd8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "80191a3d-2bc0-42a9-b018-ed799a931bd8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:15.319 18:01:07 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:15.319 18:01:07 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:09:15.319 18:01:07 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:15.319 18:01:07 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 65656 00:09:15.319 18:01:07 blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 65656 ']' 00:09:15.319 18:01:07 blockdev_nvme -- common/autotest_common.sh@950 -- # kill -0 65656 00:09:15.319 18:01:07 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:09:15.319 18:01:07 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:15.319 18:01:07 blockdev_nvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65656 00:09:15.319 killing process with pid 65656 00:09:15.319 18:01:07 blockdev_nvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:15.319 
18:01:07 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:15.319 18:01:07 blockdev_nvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65656' 00:09:15.319 18:01:07 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 65656 00:09:15.319 18:01:07 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 65656 00:09:17.853 18:01:09 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:17.853 18:01:09 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:17.853 18:01:09 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:17.853 18:01:09 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:17.853 18:01:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.853 ************************************ 00:09:17.853 START TEST bdev_hello_world 00:09:17.853 ************************************ 00:09:17.853 18:01:09 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:17.853 [2024-05-15 18:01:09.897279] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:09:17.853 [2024-05-15 18:01:09.897704] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65757 ] 00:09:17.853 [2024-05-15 18:01:10.069538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.853 [2024-05-15 18:01:10.311464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.788 [2024-05-15 18:01:10.947079] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:18.788 [2024-05-15 18:01:10.947159] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:18.788 [2024-05-15 18:01:10.947209] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:18.788 [2024-05-15 18:01:10.950298] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:18.788 [2024-05-15 18:01:10.950919] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:18.788 [2024-05-15 18:01:10.950963] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:18.788 [2024-05-15 18:01:10.951151] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
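The hello_bdev run above is the complete happy path: open the bdev named by -b, write "Hello World!" through an I/O channel, read it back, and stop the app, exactly as the hello_bdev.c NOTICE lines trace. A by-hand equivalent of the logged invocation, assuming the same tree layout (the bdev name must match one defined by the generated bdev.json):

    # Paths and bdev name are copied from this log, not guaranteed elsewhere.
    spdk_dir=/home/vagrant/spdk_repo/spdk
    "$spdk_dir/build/examples/hello_bdev" \
        --json "$spdk_dir/test/bdev/bdev.json" \
        -b Nvme0n1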
00:09:18.788 00:09:18.788 [2024-05-15 18:01:10.951184] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:19.722 00:09:19.722 real 0m2.132s 00:09:19.722 user 0m1.756s 00:09:19.722 sys 0m0.265s 00:09:19.722 18:01:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:19.722 18:01:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:19.722 ************************************ 00:09:19.722 END TEST bdev_hello_world 00:09:19.722 ************************************ 00:09:19.722 18:01:11 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:09:19.722 18:01:11 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:19.722 18:01:11 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:19.722 18:01:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:19.722 ************************************ 00:09:19.722 START TEST bdev_bounds 00:09:19.722 ************************************ 00:09:19.722 18:01:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:09:19.722 Process bdevio pid: 65799 00:09:19.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.722 18:01:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=65799 00:09:19.722 18:01:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:19.722 18:01:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:19.722 18:01:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 65799' 00:09:19.722 18:01:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 65799 00:09:19.722 18:01:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 65799 ']' 00:09:19.722 18:01:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.722 18:01:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:19.722 18:01:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.722 18:01:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:19.722 18:01:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:19.722 [2024-05-15 18:01:12.067458] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:09:19.722 [2024-05-15 18:01:12.067815] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65799 ] 00:09:19.980 [2024-05-15 18:01:12.229973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:19.980 [2024-05-15 18:01:12.451952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.980 [2024-05-15 18:01:12.452064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.980 [2024-05-15 18:01:12.452076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.914 18:01:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:20.914 18:01:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:09:20.914 18:01:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:20.914 I/O targets: 00:09:20.914 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:20.914 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:20.914 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:20.914 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:20.914 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:20.914 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:20.914 00:09:20.914 00:09:20.914 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.914 http://cunit.sourceforge.net/ 00:09:20.914 00:09:20.914 00:09:20.914 Suite: bdevio tests on: Nvme3n1 00:09:20.914 Test: blockdev write read block ...passed 00:09:20.914 Test: blockdev write zeroes read block ...passed 00:09:20.914 Test: blockdev write zeroes read no split ...passed 00:09:20.914 Test: blockdev write zeroes read split ...passed 00:09:20.914 Test: blockdev write zeroes read split partial ...passed 00:09:20.914 Test: blockdev reset ...[2024-05-15 18:01:13.287212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:20.914 passed 00:09:20.914 Test: blockdev write read 8 blocks ...[2024-05-15 18:01:13.290832] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:20.914 passed 00:09:20.914 Test: blockdev write read size > 128k ...passed 00:09:20.914 Test: blockdev write read invalid size ...passed 00:09:20.914 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:20.914 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:20.914 Test: blockdev write read max offset ...passed 00:09:20.914 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:20.914 Test: blockdev writev readv 8 blocks ...passed 00:09:20.914 Test: blockdev writev readv 30 x 1block ...passed 00:09:20.914 Test: blockdev writev readv block ...passed 00:09:20.914 Test: blockdev writev readv size > 128k ...passed 00:09:20.914 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:20.915 Test: blockdev comparev and writev ...[2024-05-15 18:01:13.299359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28fa0e000 len:0x1000 00:09:20.915 [2024-05-15 18:01:13.299431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:20.915 passed 00:09:20.915 Test: blockdev nvme passthru rw ...passed 00:09:20.915 Test: blockdev nvme passthru vendor specific ...passed 00:09:20.915 Test: blockdev nvme admin passthru ...[2024-05-15 18:01:13.300326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:20.915 [2024-05-15 18:01:13.300372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:20.915 passed 00:09:20.915 Test: blockdev copy ...passed 00:09:20.915 Suite: bdevio tests on: Nvme2n3 00:09:20.915 Test: blockdev write read block ...passed 00:09:20.915 Test: blockdev write zeroes read block ...passed 00:09:20.915 Test: blockdev write zeroes read no split ...passed 00:09:20.915 Test: blockdev write zeroes read split ...passed 00:09:20.915 Test: blockdev write zeroes read split partial ...passed 00:09:20.915 Test: blockdev reset ...[2024-05-15 18:01:13.374150] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:20.915 [2024-05-15 18:01:13.378010] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:20.915 passed 00:09:20.915 Test: blockdev write read 8 blocks ...passed 00:09:20.915 Test: blockdev write read size > 128k ...passed 00:09:20.915 Test: blockdev write read invalid size ...passed 00:09:20.915 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:20.915 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:20.915 Test: blockdev write read max offset ...passed 00:09:20.915 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:20.915 Test: blockdev writev readv 8 blocks ...passed 00:09:20.915 Test: blockdev writev readv 30 x 1block ...passed 00:09:20.915 Test: blockdev writev readv block ...passed 00:09:20.915 Test: blockdev writev readv size > 128k ...passed 00:09:20.915 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:20.915 Test: blockdev comparev and writev ...[2024-05-15 18:01:13.387259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28fa0a000 len:0x1000 00:09:20.915 [2024-05-15 18:01:13.387328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:20.915 passed 00:09:20.915 Test: blockdev nvme passthru rw ...passed 00:09:20.915 Test: blockdev nvme passthru vendor specific ...passed 00:09:20.915 Test: blockdev nvme admin passthru ...[2024-05-15 18:01:13.388289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:20.915 [2024-05-15 18:01:13.388346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:20.915 passed 00:09:20.915 Test: blockdev copy ...passed 00:09:20.915 Suite: bdevio tests on: Nvme2n2 00:09:20.915 Test: blockdev write read block ...passed 00:09:20.915 Test: blockdev write zeroes read block ...passed 00:09:20.915 Test: blockdev write zeroes read no split ...passed 00:09:21.173 Test: blockdev write zeroes read split ...passed 00:09:21.173 Test: blockdev write zeroes read split partial ...passed 00:09:21.173 Test: blockdev reset ...[2024-05-15 18:01:13.464567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:21.173 [2024-05-15 18:01:13.468482] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:21.173 passed 00:09:21.173 Test: blockdev write read 8 blocks ...passed 00:09:21.173 Test: blockdev write read size > 128k ...passed 00:09:21.173 Test: blockdev write read invalid size ...passed 00:09:21.173 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:21.173 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:21.173 Test: blockdev write read max offset ...passed 00:09:21.173 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:21.173 Test: blockdev writev readv 8 blocks ...passed 00:09:21.173 Test: blockdev writev readv 30 x 1block ...passed 00:09:21.173 Test: blockdev writev readv block ...passed 00:09:21.173 Test: blockdev writev readv size > 128k ...passed 00:09:21.173 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:21.173 Test: blockdev comparev and writev ...[2024-05-15 18:01:13.477396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283806000 len:0x1000 00:09:21.173 [2024-05-15 18:01:13.477451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:21.173 passed 00:09:21.173 Test: blockdev nvme passthru rw ...passed 00:09:21.173 Test: blockdev nvme passthru vendor specific ...passed 00:09:21.174 Test: blockdev nvme admin passthru ...[2024-05-15 18:01:13.478340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:21.174 [2024-05-15 18:01:13.478383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:21.174 passed 00:09:21.174 Test: blockdev copy ...passed 00:09:21.174 Suite: bdevio tests on: Nvme2n1 00:09:21.174 Test: blockdev write read block ...passed 00:09:21.174 Test: blockdev write zeroes read block ...passed 00:09:21.174 Test: blockdev write zeroes read no split ...passed 00:09:21.174 Test: blockdev write zeroes read split ...passed 00:09:21.174 Test: blockdev write zeroes read split partial ...passed 00:09:21.174 Test: blockdev reset ...[2024-05-15 18:01:13.551104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:21.174 [2024-05-15 18:01:13.554984] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:21.174 passed 00:09:21.174 Test: blockdev write read 8 blocks ...passed 00:09:21.174 Test: blockdev write read size > 128k ...passed 00:09:21.174 Test: blockdev write read invalid size ...passed 00:09:21.174 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:21.174 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:21.174 Test: blockdev write read max offset ...passed 00:09:21.174 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:21.174 Test: blockdev writev readv 8 blocks ...passed 00:09:21.174 Test: blockdev writev readv 30 x 1block ...passed 00:09:21.174 Test: blockdev writev readv block ...passed 00:09:21.174 Test: blockdev writev readv size > 128k ...passed 00:09:21.174 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:21.174 Test: blockdev comparev and writev ...[2024-05-15 18:01:13.563724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283801000 len:0x1000 00:09:21.174 [2024-05-15 18:01:13.563792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:21.174 passed 00:09:21.174 Test: blockdev nvme passthru rw ...passed 00:09:21.174 Test: blockdev nvme passthru vendor specific ...[2024-05-15 18:01:13.564642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:21.174 passed 00:09:21.174 Test: blockdev nvme admin passthru ...[2024-05-15 18:01:13.564685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:21.174 passed 00:09:21.174 Test: blockdev copy ...passed 00:09:21.174 Suite: bdevio tests on: Nvme1n1 00:09:21.174 Test: blockdev write read block ...passed 00:09:21.174 Test: blockdev write zeroes read block ...passed 00:09:21.174 Test: blockdev write zeroes read no split ...passed 00:09:21.174 Test: blockdev write zeroes read split ...passed 00:09:21.174 Test: blockdev write zeroes read split partial ...passed 00:09:21.174 Test: blockdev reset ...[2024-05-15 18:01:13.633030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:21.174 [2024-05-15 18:01:13.636750] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:21.174 passed 00:09:21.174 Test: blockdev write read 8 blocks ...passed 00:09:21.174 Test: blockdev write read size > 128k ...passed 00:09:21.174 Test: blockdev write read invalid size ...passed 00:09:21.174 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:21.174 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:21.174 Test: blockdev write read max offset ...passed 00:09:21.174 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:21.174 Test: blockdev writev readv 8 blocks ...passed 00:09:21.174 Test: blockdev writev readv 30 x 1block ...passed 00:09:21.174 Test: blockdev writev readv block ...passed 00:09:21.174 Test: blockdev writev readv size > 128k ...passed 00:09:21.174 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:21.174 Test: blockdev comparev and writev ...[2024-05-15 18:01:13.645248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x293406000 len:0x1000 00:09:21.174 [2024-05-15 18:01:13.645316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:21.174 passed 00:09:21.174 Test: blockdev nvme passthru rw ...passed 00:09:21.174 Test: blockdev nvme passthru vendor specific ...passed 00:09:21.174 Test: blockdev nvme admin passthru ...[2024-05-15 18:01:13.646090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:21.174 [2024-05-15 18:01:13.646135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:21.174 passed 00:09:21.174 Test: blockdev copy ...passed 00:09:21.174 Suite: bdevio tests on: Nvme0n1 00:09:21.174 Test: blockdev write read block ...passed 00:09:21.174 Test: blockdev write zeroes read block ...passed 00:09:21.174 Test: blockdev write zeroes read no split ...passed 00:09:21.431 Test: blockdev write zeroes read split ...passed 00:09:21.431 Test: blockdev write zeroes read split partial ...passed 00:09:21.431 Test: blockdev reset ...[2024-05-15 18:01:13.708607] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:21.431 [2024-05-15 18:01:13.712265] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:21.431 passed 00:09:21.431 Test: blockdev write read 8 blocks ...passed 00:09:21.431 Test: blockdev write read size > 128k ...passed 00:09:21.431 Test: blockdev write read invalid size ...passed 00:09:21.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:21.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:21.431 Test: blockdev write read max offset ...passed 00:09:21.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:21.431 Test: blockdev writev readv 8 blocks ...passed 00:09:21.431 Test: blockdev writev readv 30 x 1block ...passed 00:09:21.431 Test: blockdev writev readv block ...passed 00:09:21.431 Test: blockdev writev readv size > 128k ...passed 00:09:21.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:21.431 Test: blockdev comparev and writev ...passed 00:09:21.431 Test: blockdev nvme passthru rw ...[2024-05-15 18:01:13.720681] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:21.431 separate metadata which is not supported yet. 00:09:21.431 passed 00:09:21.431 Test: blockdev nvme passthru vendor specific ...passed 00:09:21.431 Test: blockdev nvme admin passthru ...[2024-05-15 18:01:13.721185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:21.431 [2024-05-15 18:01:13.721245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:21.431 passed 00:09:21.431 Test: blockdev copy ...passed 00:09:21.431 00:09:21.431 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.431 suites 6 6 n/a 0 0 00:09:21.431 tests 138 138 138 0 0 00:09:21.431 asserts 893 893 893 0 n/a 00:09:21.431 00:09:21.431 Elapsed time = 1.388 seconds 00:09:21.431 0 00:09:21.431 18:01:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 65799 00:09:21.431 18:01:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 65799 ']' 00:09:21.431 18:01:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 65799 00:09:21.431 18:01:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:09:21.431 18:01:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:21.431 18:01:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65799 00:09:21.431 18:01:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:21.431 18:01:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:21.431 18:01:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65799' 00:09:21.431 killing process with pid 65799 00:09:21.431 18:01:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 65799 00:09:21.431 18:01:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 65799 00:09:22.365 18:01:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:09:22.365 ************************************ 00:09:22.365 END TEST bdev_bounds 00:09:22.365 ************************************ 00:09:22.365 00:09:22.365 real 0m2.759s 00:09:22.365 user 0m6.824s 00:09:22.365 sys 0m0.412s 00:09:22.365 18:01:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.365 18:01:14 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:22.365 18:01:14 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:22.365 18:01:14 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:22.365 18:01:14 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.365 18:01:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.365 ************************************ 00:09:22.365 START TEST bdev_nbd 00:09:22.365 ************************************ 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=65858 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 65858 /var/tmp/spdk-nbd.sock 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 65858 ']' 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 
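(The waitforlisten call above, whose body is mostly hidden by xtrace_disable, polls until the freshly started app answers on its UNIX-domain RPC socket. A plausible reconstruction — only rpc_addr and max_retries=100 are visible in the trace; the rpc_get_methods probe and the 0.1 s retry interval are assumptions:

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    while ((max_retries-- > 0)); do
        # give up early if the app already exited
        kill -0 "$pid" 2>/dev/null || return 1
        # done as soon as any RPC is answered on the socket
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}
)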
00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:22.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:22.365 18:01:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:22.624 [2024-05-15 18:01:14.892532] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:09:22.624 [2024-05-15 18:01:14.892679] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.624 [2024-05-15 18:01:15.072197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.882 [2024-05-15 18:01:15.318240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:23.826 18:01:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:23.826 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:23.826 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:23.826 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:23.826 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:23.826 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:23.826 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:23.826 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:23.826 18:01:16 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:23.826 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:23.826 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:23.826 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:23.826 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:23.826 1+0 records in 00:09:23.826 1+0 records out 00:09:23.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417559 s, 9.8 MB/s 00:09:23.826 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:23.827 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:23.827 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:23.827 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:23.827 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:23.827 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:23.827 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:23.827 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:24.083 1+0 records in 00:09:24.083 1+0 records out 00:09:24.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624907 s, 6.6 MB/s 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:24.083 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:24.084 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:24.342 1+0 records in 00:09:24.342 1+0 records out 00:09:24.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555081 s, 7.4 MB/s 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:24.342 18:01:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:24.607 
18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:24.607 1+0 records in 00:09:24.607 1+0 records out 00:09:24.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000939413 s, 4.4 MB/s 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:24.607 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:25.174 1+0 records in 00:09:25.174 1+0 records out 00:09:25.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000973676 s, 4.2 MB/s 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme3n1 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:25.174 1+0 records in 00:09:25.174 1+0 records out 00:09:25.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111225 s, 3.7 MB/s 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:25.174 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:25.434 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:25.434 { 00:09:25.434 "nbd_device": "/dev/nbd0", 00:09:25.434 "bdev_name": "Nvme0n1" 00:09:25.434 }, 00:09:25.434 { 00:09:25.434 "nbd_device": "/dev/nbd1", 00:09:25.434 "bdev_name": "Nvme1n1" 00:09:25.434 }, 00:09:25.434 { 00:09:25.434 "nbd_device": "/dev/nbd2", 00:09:25.434 "bdev_name": "Nvme2n1" 00:09:25.434 }, 00:09:25.434 { 00:09:25.434 "nbd_device": "/dev/nbd3", 00:09:25.434 "bdev_name": "Nvme2n2" 00:09:25.434 }, 00:09:25.434 { 00:09:25.434 "nbd_device": "/dev/nbd4", 00:09:25.434 "bdev_name": "Nvme2n3" 00:09:25.434 }, 00:09:25.434 { 00:09:25.434 "nbd_device": "/dev/nbd5", 00:09:25.434 "bdev_name": "Nvme3n1" 00:09:25.434 } 00:09:25.434 ]' 00:09:25.434 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:25.434 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:25.434 { 00:09:25.434 "nbd_device": "/dev/nbd0", 00:09:25.434 "bdev_name": "Nvme0n1" 00:09:25.434 }, 00:09:25.434 { 00:09:25.434 "nbd_device": "/dev/nbd1", 00:09:25.434 "bdev_name": "Nvme1n1" 00:09:25.434 }, 00:09:25.434 { 00:09:25.434 "nbd_device": "/dev/nbd2", 
00:09:25.434 "bdev_name": "Nvme2n1" 00:09:25.434 }, 00:09:25.434 { 00:09:25.434 "nbd_device": "/dev/nbd3", 00:09:25.434 "bdev_name": "Nvme2n2" 00:09:25.434 }, 00:09:25.434 { 00:09:25.434 "nbd_device": "/dev/nbd4", 00:09:25.434 "bdev_name": "Nvme2n3" 00:09:25.434 }, 00:09:25.434 { 00:09:25.434 "nbd_device": "/dev/nbd5", 00:09:25.434 "bdev_name": "Nvme3n1" 00:09:25.434 } 00:09:25.434 ]' 00:09:25.434 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:25.895 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:25.895 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.895 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:25.895 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:25.895 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:25.895 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:25.895 18:01:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:25.895 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:25.895 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:25.895 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:25.895 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:25.895 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:25.895 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:25.895 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:25.895 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:25.895 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:25.895 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:26.165 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:26.165 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:26.165 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:26.165 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.165 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.165 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:26.165 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:26.165 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.165 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.165 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:26.423 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:26.423 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 
-- # waitfornbd_exit nbd2 00:09:26.424 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:26.424 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.424 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.424 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:26.424 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:26.424 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.424 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.424 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:26.682 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:26.682 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:26.682 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:26.682 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.682 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.682 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:26.682 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:26.683 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.683 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.683 18:01:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:26.941 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:26.941 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:26.941 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:26.941 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.941 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.941 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:26.941 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:26.941 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.941 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.941 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:27.198 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:27.198 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:27.198 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:27.198 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:27.198 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:27.198 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:27.198 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:27.198 18:01:19 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:27.199 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:27.199 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.199 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:27.457 18:01:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1 /dev/nbd0 00:09:27.715 /dev/nbd0 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.715 1+0 records in 00:09:27.715 1+0 records out 00:09:27.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056825 s, 7.2 MB/s 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:27.715 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:27.973 /dev/nbd1 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.973 1+0 records in 00:09:27.973 1+0 records out 00:09:27.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537898 s, 7.6 MB/s 
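(The grep/dd/stat sequences threaded through these traces are the waitfornbd helper confirming each newly attached /dev/nbdX is live, and waitfornbd_exit later confirming teardown. Reconstructed from the trace — the 20-try loops, the 4096-byte direct read, and the size check are all visible above; the sleep interval and the scratch-file path are assumptions:

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        # wait for the kernel to publish the new device
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    for ((i = 1; i <= 20; i++)); do
        # prove it is readable: pull one 4 KiB block straight through the device
        if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [[ $size != 0 ]] && return 0
        fi
        sleep 0.1
    done
    return 1
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # done once the partition entry disappears again
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1
    done
    return 0
}
)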
00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:27.973 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:28.232 /dev/nbd10 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.232 1+0 records in 00:09:28.232 1+0 records out 00:09:28.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765017 s, 5.4 MB/s 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:28.232 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:28.496 /dev/nbd11 00:09:28.496 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:28.496 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 
00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.497 1+0 records in 00:09:28.497 1+0 records out 00:09:28.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000942106 s, 4.3 MB/s 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:28.497 18:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:28.758 /dev/nbd12 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.758 1+0 records in 00:09:28.758 1+0 records out 00:09:28.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000933138 s, 4.4 MB/s 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:28.758 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:29.016 /dev/nbd13 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:29.016 1+0 records in 00:09:29.016 1+0 records out 00:09:29.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000840273 s, 4.9 MB/s 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.016 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.274 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:29.274 { 00:09:29.274 "nbd_device": "/dev/nbd0", 00:09:29.274 "bdev_name": "Nvme0n1" 00:09:29.274 }, 00:09:29.274 { 00:09:29.274 "nbd_device": "/dev/nbd1", 00:09:29.274 "bdev_name": "Nvme1n1" 00:09:29.274 }, 00:09:29.274 { 00:09:29.274 "nbd_device": "/dev/nbd10", 00:09:29.274 "bdev_name": "Nvme2n1" 00:09:29.274 }, 00:09:29.274 { 00:09:29.274 "nbd_device": "/dev/nbd11", 00:09:29.274 "bdev_name": "Nvme2n2" 00:09:29.274 }, 00:09:29.274 { 00:09:29.274 "nbd_device": "/dev/nbd12", 00:09:29.274 "bdev_name": "Nvme2n3" 00:09:29.274 
}, 00:09:29.274 { 00:09:29.275 "nbd_device": "/dev/nbd13", 00:09:29.275 "bdev_name": "Nvme3n1" 00:09:29.275 } 00:09:29.275 ]' 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:29.275 { 00:09:29.275 "nbd_device": "/dev/nbd0", 00:09:29.275 "bdev_name": "Nvme0n1" 00:09:29.275 }, 00:09:29.275 { 00:09:29.275 "nbd_device": "/dev/nbd1", 00:09:29.275 "bdev_name": "Nvme1n1" 00:09:29.275 }, 00:09:29.275 { 00:09:29.275 "nbd_device": "/dev/nbd10", 00:09:29.275 "bdev_name": "Nvme2n1" 00:09:29.275 }, 00:09:29.275 { 00:09:29.275 "nbd_device": "/dev/nbd11", 00:09:29.275 "bdev_name": "Nvme2n2" 00:09:29.275 }, 00:09:29.275 { 00:09:29.275 "nbd_device": "/dev/nbd12", 00:09:29.275 "bdev_name": "Nvme2n3" 00:09:29.275 }, 00:09:29.275 { 00:09:29.275 "nbd_device": "/dev/nbd13", 00:09:29.275 "bdev_name": "Nvme3n1" 00:09:29.275 } 00:09:29.275 ]' 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:29.275 /dev/nbd1 00:09:29.275 /dev/nbd10 00:09:29.275 /dev/nbd11 00:09:29.275 /dev/nbd12 00:09:29.275 /dev/nbd13' 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:29.275 /dev/nbd1 00:09:29.275 /dev/nbd10 00:09:29.275 /dev/nbd11 00:09:29.275 /dev/nbd12 00:09:29.275 /dev/nbd13' 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:29.275 256+0 records in 00:09:29.275 256+0 records out 00:09:29.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00770981 s, 136 MB/s 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:29.275 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:29.533 256+0 records in 00:09:29.533 256+0 records out 00:09:29.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163203 s, 6.4 MB/s 00:09:29.533 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:29.533 18:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:09:29.791 256+0 records in 00:09:29.791 256+0 records out 00:09:29.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154713 s, 6.8 MB/s 00:09:29.791 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:29.791 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:29.791 256+0 records in 00:09:29.791 256+0 records out 00:09:29.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178281 s, 5.9 MB/s 00:09:29.791 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:29.791 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:30.049 256+0 records in 00:09:30.049 256+0 records out 00:09:30.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177147 s, 5.9 MB/s 00:09:30.049 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:30.049 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:30.308 256+0 records in 00:09:30.308 256+0 records out 00:09:30.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17842 s, 5.9 MB/s 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:30.308 256+0 records in 00:09:30.308 256+0 records out 00:09:30.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178481 s, 5.9 MB/s 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:30.308 18:01:22 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:30.308 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:30.570 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:30.570 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:30.570 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:30.570 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:30.570 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.570 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:30.570 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:30.570 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:30.570 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:30.570 18:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:30.828 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:30.828 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:30.828 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:30.828 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:30.828 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:30.828 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:30.828 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:30.828 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:30.828 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:30.828 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:31.093 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:31.093 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:31.093 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:31.093 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:31.093 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:31.093 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:31.093 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:31.093 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:31.093 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:31.093 18:01:23 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:31.350 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:31.350 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:31.350 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:31.350 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:31.350 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:31.350 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:31.350 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:31.350 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:31.350 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:31.350 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:31.609 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:31.609 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:31.609 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:31.609 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:31.609 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:31.609 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:31.609 18:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:31.609 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:31.609 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:31.609 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:31.868 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:31.868 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:31.868 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:31.868 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:31.868 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:31.868 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:31.868 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:31.868 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:31.868 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:31.868 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:32.127 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:32.127 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:32.127 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:32.127 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:32.127 18:01:24 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.127 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:32.127 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:32.127 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:32.127 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:32.127 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.127 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:32.385 18:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:32.644 malloc_lvol_verify 00:09:32.644 18:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:32.902 0cd3b495-eb95-4cfc-9cb7-985ef7d5cd14 00:09:32.902 18:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:33.161 f25e2563-675a-4dde-89f4-7b5a7d48624e 00:09:33.161 18:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:33.419 /dev/nbd0 00:09:33.419 18:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:33.419 mke2fs 1.46.5 (30-Dec-2021) 00:09:33.419 Discarding device blocks: 0/4096 done 00:09:33.419 Creating filesystem with 
4096 1k blocks and 1024 inodes 00:09:33.419 00:09:33.419 Allocating group tables: 0/1 done 00:09:33.419 Writing inode tables: 0/1 done 00:09:33.419 Creating journal (1024 blocks): done 00:09:33.419 Writing superblocks and filesystem accounting information: 0/1 done 00:09:33.419 00:09:33.419 18:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:33.419 18:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:33.419 18:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.419 18:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:33.419 18:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:33.419 18:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:33.419 18:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.419 18:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 65858 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 65858 ']' 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 65858 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65858 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:33.677 killing process with pid 65858 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65858' 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 65858 00:09:33.677 18:01:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 65858 00:09:35.050 18:01:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:35.050 00:09:35.050 real 0m12.392s 00:09:35.050 user 0m17.243s 00:09:35.050 sys 0m3.956s 00:09:35.050 18:01:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:35.050 
************************************ 00:09:35.050 END TEST bdev_nbd 00:09:35.050 ************************************ 00:09:35.050 18:01:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:35.050 18:01:27 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:35.050 skipping fio tests on NVMe due to multi-ns failures. 00:09:35.050 18:01:27 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:09:35.050 18:01:27 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:35.050 18:01:27 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:35.050 18:01:27 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:35.050 18:01:27 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:09:35.050 18:01:27 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:35.050 18:01:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:35.050 ************************************ 00:09:35.050 START TEST bdev_verify 00:09:35.050 ************************************ 00:09:35.050 18:01:27 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:35.050 [2024-05-15 18:01:27.338518] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:09:35.050 [2024-05-15 18:01:27.338693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66266 ] 00:09:35.050 [2024-05-15 18:01:27.513077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:35.331 [2024-05-15 18:01:27.757278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.331 [2024-05-15 18:01:27.757283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.264 Running I/O for 5 seconds... 
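For reference while the five-second verify run above completes: the whole pass is one bdevperf invocation, repeated here exactly as traced (the flag glosses are editorial; -C is left uninterpreted):

# -q 128: queue depth per job; -o 4096: I/O size in bytes; -w verify: read-back-and-compare;
# -t 5: seconds to run; -m 0x3: two reactor cores, hence the Core Mask 0x1/0x2 job pairs below.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3

Its per-device latency table follows.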
00:09:41.529 
00:09:41.529 Latency(us)
00:09:41.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:41.529 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:41.529 Verification LBA range: start 0x0 length 0xbd0bd
00:09:41.529 Nvme0n1 : 5.05 1545.51 6.04 0.00 0.00 82589.68 17873.45 76260.07
00:09:41.529 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:41.529 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:41.529 Nvme0n1 : 5.07 1538.94 6.01 0.00 0.00 82978.31 15371.17 76736.70
00:09:41.529 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:41.529 Verification LBA range: start 0x0 length 0xa0000
00:09:41.529 Nvme1n1 : 5.05 1545.00 6.04 0.00 0.00 82490.21 18707.55 68634.07
00:09:41.529 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:41.529 Verification LBA range: start 0xa0000 length 0xa0000
00:09:41.529 Nvme1n1 : 5.08 1538.38 6.01 0.00 0.00 82743.71 15490.33 68157.44
00:09:41.529 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:41.529 Verification LBA range: start 0x0 length 0x80000
00:09:41.529 Nvme2n1 : 5.06 1544.49 6.03 0.00 0.00 82364.03 17277.67 66727.56
00:09:41.529 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:41.529 Verification LBA range: start 0x80000 length 0x80000
00:09:41.529 Nvme2n1 : 5.08 1537.82 6.01 0.00 0.00 82619.49 15490.33 65297.69
00:09:41.529 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:41.529 Verification LBA range: start 0x0 length 0x80000
00:09:41.529 Nvme2n2 : 5.06 1543.98 6.03 0.00 0.00 82186.87 16443.58 67204.19
00:09:41.529 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:41.529 Verification LBA range: start 0x80000 length 0x80000
00:09:41.529 Nvme2n2 : 5.08 1537.21 6.00 0.00 0.00 82474.59 15371.17 66727.56
00:09:41.529 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:41.529 Verification LBA range: start 0x0 length 0x80000
00:09:41.529 Nvme2n3 : 5.07 1552.67 6.07 0.00 0.00 81581.05 4617.31 70063.94
00:09:41.529 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:41.529 Verification LBA range: start 0x80000 length 0x80000
00:09:41.529 Nvme2n3 : 5.08 1536.36 6.00 0.00 0.00 82334.41 15013.70 69587.32
00:09:41.529 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:41.529 Verification LBA range: start 0x0 length 0x20000
00:09:41.529 Nvme3n1 : 5.08 1562.55 6.10 0.00 0.00 80975.60 6076.97 72447.07
00:09:41.529 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:41.529 Verification LBA range: start 0x20000 length 0x20000
00:09:41.529 Nvme3n1 : 5.08 1535.82 6.00 0.00 0.00 82202.15 10128.29 72447.07
00:09:41.529 ===================================================================================================================
00:09:41.529 Total : 18518.74 72.34 0.00 0.00 82292.72 4617.31 76736.70
00:09:42.907 
00:09:42.907 real 0m7.733s
00:09:42.907 user 0m14.007s
00:09:42.907 sys 0m0.357s
00:09:42.907 18:01:34 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:42.907 18:01:34 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:09:42.907 ************************************
00:09:42.907 END TEST bdev_verify
00:09:42.907 ************************************
00:09:42.907 18:01:35 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:42.907 18:01:35 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']'
00:09:42.907 18:01:35 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:42.907 18:01:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:42.907 ************************************
00:09:42.907 START TEST bdev_verify_big_io
00:09:42.907 ************************************
00:09:42.907 18:01:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:42.907 [2024-05-15 18:01:35.110457] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization...
00:09:42.907 [2024-05-15 18:01:35.110628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66364 ]
00:09:42.907 [2024-05-15 18:01:35.272734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:43.169 [2024-05-15 18:01:35.499644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.169 [2024-05-15 18:01:35.499644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:44.108 Running I/O for 5 seconds...
00:09:50.684 
00:09:50.684 Latency(us)
00:09:50.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:50.684 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:50.684 Verification LBA range: start 0x0 length 0xbd0b
00:09:50.684 Nvme0n1 : 5.68 127.68 7.98 0.00 0.00 975571.94 34317.03 1044763.00
00:09:50.684 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:50.684 Verification LBA range: start 0xbd0b length 0xbd0b
00:09:50.684 Nvme0n1 : 5.75 129.34 8.08 0.00 0.00 900695.27 48615.80 1525201.45
00:09:50.684 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:50.684 Verification LBA range: start 0x0 length 0xa000
00:09:50.684 Nvme1n1 : 5.72 130.55 8.16 0.00 0.00 928097.43 59101.56 869364.83
00:09:50.684 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:50.684 Verification LBA range: start 0xa000 length 0xa000
00:09:50.684 Nvme1n1 : 5.82 136.36 8.52 0.00 0.00 832857.36 22520.55 1540453.47
00:09:50.684 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:50.684 Verification LBA range: start 0x0 length 0x8000
00:09:50.684 Nvme2n1 : 5.72 127.90 7.99 0.00 0.00 917061.72 59101.56 953250.91
00:09:50.684 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:50.684 Verification LBA range: start 0x8000 length 0x8000
00:09:50.684 Nvme2n1 : 5.85 140.29 8.77 0.00 0.00 786052.15 47185.92 1563331.49
00:09:50.684 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:50.684 Verification LBA range: start 0x0 length 0x8000
00:09:50.684 Nvme2n2 : 5.73 125.04 7.82 0.00 0.00 918827.20 41228.10 1875997.79
00:09:50.684 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:50.684 Verification LBA range: start 0x8000 length 0x8000
00:09:50.684 Nvme2n2 : 5.87 164.54 10.28 0.00 0.00 655099.05 688.87 1586209.51
00:09:50.684 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:50.684 Verification LBA range: start 0x0 length 0x8000
00:09:50.684 Nvme2n3 : 5.77 136.16 8.51 0.00 0.00 817814.39 42419.67 941811.90
00:09:50.684 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:50.684 Verification LBA range: start 0x8000 length 0x8000
00:09:50.684 Nvme2n3 : 5.70 134.80 8.42 0.00 0.00 918029.81 21567.30 972315.93
00:09:50.684 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:50.684 Verification LBA range: start 0x0 length 0x2000
00:09:50.684 Nvme3n1 : 5.85 156.44 9.78 0.00 0.00 696468.06 1295.83 960876.92
00:09:50.684 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:50.684 Verification LBA range: start 0x2000 length 0x2000
00:09:50.684 Nvme3n1 : 5.70 134.73 8.42 0.00 0.00 890913.20 108193.98 827421.79
00:09:50.684 ===================================================================================================================
00:09:50.684 Total : 1643.83 102.74 0.00 0.00 844624.09 688.87 1875997.79
00:09:51.624 
00:09:51.624 real 0m8.877s
00:09:51.624 user 0m16.280s
00:09:51.624 sys 0m0.342s
00:09:51.624 18:01:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:51.624 18:01:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:09:51.624 ************************************
00:09:51.624 END TEST bdev_verify_big_io
00:09:51.624 ************************************
00:09:51.624 18:01:43 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:51.624 18:01:43 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:09:51.624 18:01:43 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:51.624 18:01:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:51.624 ************************************
00:09:51.624 START TEST bdev_write_zeroes
00:09:51.624 ************************************
00:09:51.624 18:01:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:51.624 [2024-05-15 18:01:44.057916] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization...
00:09:51.624 [2024-05-15 18:01:44.058113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66475 ]
00:09:51.883 [2024-05-15 18:01:44.232900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:52.142 [2024-05-15 18:01:44.473876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:52.706 Running I/O for 1 seconds...
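Relative to that first verify invocation, the big-I/O pass above changed only the I/O size, and the zero-fill pass now starting swaps the workload, shortens the run, and comes up on a single reactor (matching the -c 0x1 in its EAL parameters). Side by side, commands as traced:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
# bdev_verify_big_io: same verify workload with 64 KiB blocks
"$bdevperf" --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3
# bdev_write_zeroes: WRITE ZEROES for one second, no explicit core mask
"$bdevperf" --json "$conf" -q 128 -o 4096 -w write_zeroes -t 1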
00:09:54.075 
00:09:54.075 Latency(us)
00:09:54.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:54.075 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:54.075 Nvme0n1 : 1.02 8257.33 32.26 0.00 0.00 15436.95 11677.32 28120.90
00:09:54.075 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:54.075 Nvme1n1 : 1.02 8243.81 32.20 0.00 0.00 15436.49 12332.68 29193.31
00:09:54.075 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:54.075 Nvme2n1 : 1.02 8272.38 32.31 0.00 0.00 15374.57 10307.03 28120.90
00:09:54.075 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:54.075 Nvme2n2 : 1.02 8259.01 32.26 0.00 0.00 15322.73 10664.49 25141.99
00:09:54.075 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:54.075 Nvme2n3 : 1.02 8246.67 32.21 0.00 0.00 15290.26 10664.49 23116.33
00:09:54.075 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:54.075 Nvme3n1 : 1.03 8281.71 32.35 0.00 0.00 15211.00 7179.17 22282.24
00:09:54.075 ===================================================================================================================
00:09:54.075 Total : 49560.91 193.60 0.00 0.00 15344.93 7179.17 29193.31
00:09:55.010 
00:09:55.010 real 0m3.498s
00:09:55.010 user 0m3.071s
00:09:55.010 sys 0m0.298s
00:09:55.010 18:01:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:55.010 ************************************
00:09:55.010 18:01:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:55.010 END TEST bdev_write_zeroes
00:09:55.010 ************************************
00:09:55.010 18:01:47 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:55.010 18:01:47 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:09:55.010 18:01:47 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:55.010 18:01:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:55.010 ************************************
00:09:55.010 START TEST bdev_json_nonenclosed
00:09:55.010 ************************************
00:09:55.010 18:01:47 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:55.268 [2024-05-15 18:01:47.604117] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization...
00:09:55.268 [2024-05-15 18:01:47.604317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66539 ]
00:09:55.527 [2024-05-15 18:01:47.782562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:55.785 [2024-05-15 18:01:48.068443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:55.785 [2024-05-15 18:01:48.068579] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
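bdev_json_nonenclosed is a negative test: bdevperf is handed a config whose subsystems data is not wrapped in a top-level object, and the harness passes only when the app rejects it (the rpc_server_finish error and spdk_app_stop warning just below are the expected shutdown). The repo's actual nonenclosed.json is not reproduced in this log; a hypothetical fixture with the same defect:

cat > nonenclosed.json <<'EOF'
"subsystems": [
  { "subsystem": "bdev", "config": [] }
]
EOF
# Success here would be the test failure; the config must be refused.
if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1; then
    echo 'ERROR: config lacking the enclosing {} was accepted' >&2
fi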
00:09:55.785 [2024-05-15 18:01:48.068621] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:55.785 [2024-05-15 18:01:48.068653] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:56.043 00:09:56.043 real 0m0.991s 00:09:56.043 user 0m0.730s 00:09:56.043 sys 0m0.153s 00:09:56.043 18:01:48 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:56.043 ************************************ 00:09:56.043 END TEST bdev_json_nonenclosed 00:09:56.043 ************************************ 00:09:56.043 18:01:48 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:56.043 18:01:48 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:56.043 18:01:48 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:56.043 18:01:48 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:56.043 18:01:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.309 ************************************ 00:09:56.309 START TEST bdev_json_nonarray 00:09:56.309 ************************************ 00:09:56.309 18:01:48 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:56.309 [2024-05-15 18:01:48.647282] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:09:56.309 [2024-05-15 18:01:48.647551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66570 ] 00:09:56.568 [2024-05-15 18:01:48.819012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.826 [2024-05-15 18:01:49.111000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.826 [2024-05-15 18:01:49.111140] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
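The nonarray twin, whose rejection is logged just above, flips the defect: the top-level object is present, but "subsystems" maps to a single object rather than an array. Again hypothetical, since the fixture itself is not shown in the log; it runs under the same expect-failure harness:

cat > nonarray.json <<'EOF'
{
  "subsystems": { "subsystem": "bdev", "config": [] }
}
EOF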
00:09:56.826 [2024-05-15 18:01:49.111174] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:56.826 [2024-05-15 18:01:49.111194] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:57.086 00:09:57.086 real 0m0.989s 00:09:57.086 user 0m0.715s 00:09:57.086 sys 0m0.165s 00:09:57.086 18:01:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:57.086 18:01:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:57.086 ************************************ 00:09:57.086 END TEST bdev_json_nonarray 00:09:57.086 ************************************ 00:09:57.086 18:01:49 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:09:57.086 18:01:49 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:09:57.086 18:01:49 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:09:57.086 18:01:49 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:57.086 18:01:49 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:09:57.086 18:01:49 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:57.086 18:01:49 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:57.086 18:01:49 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:57.086 18:01:49 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:57.086 18:01:49 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:57.086 18:01:49 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:57.086 00:09:57.086 real 0m44.065s 00:09:57.086 user 1m5.095s 00:09:57.086 sys 0m6.867s 00:09:57.086 18:01:49 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:57.086 18:01:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.086 ************************************ 00:09:57.086 END TEST blockdev_nvme 00:09:57.086 ************************************ 00:09:57.349 18:01:49 -- spdk/autotest.sh@209 -- # uname -s 00:09:57.349 18:01:49 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:09:57.349 18:01:49 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:57.349 18:01:49 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:57.349 18:01:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:57.349 18:01:49 -- common/autotest_common.sh@10 -- # set +x 00:09:57.349 ************************************ 00:09:57.349 START TEST blockdev_nvme_gpt 00:09:57.349 ************************************ 00:09:57.349 18:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:57.349 * Looking for test storage... 
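Every START TEST/END TEST banner pair and real/user/sys triple in this log, including those for the blockdev_nvme_gpt suite spinning up here, comes from the run_test helper in test/common/autotest_common.sh. A stripped-down sketch of the pattern (the real helper also toggles xtrace and records failures):

run_test() {
    local test_name=$1; shift
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    time "$@"    # produces the real/user/sys lines seen after each test
    local rc=$?
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
    return "$rc"
}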
00:09:57.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66646 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 66646 00:09:57.349 18:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@827 -- # '[' -z 66646 ']' 00:09:57.349 18:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.349 18:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:57.349 18:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:57.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.349 18:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
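start_spdk_tgt backgrounds the target binary, and waitforlisten (pid 66646, max_retries=100 above) polls until the RPC socket answers. A minimal sketch of that start-and-wait handshake, assuming rpc_get_methods as the probe (the real waitforlisten also resolves the pid more carefully and traps with its killprocess helper):

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc_addr=/var/tmp/spdk.sock
max_retries=100

"$spdk_tgt" &
spdk_tgt_pid=$!
trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
for ((i = 0; i < max_retries; i++)); do
    # Any successful RPC proves the server is listening; rpc_get_methods is a cheap one.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
    kill -0 "$spdk_tgt_pid" 2> /dev/null || exit 1    # bail if the target died during startup
    sleep 0.1
done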
00:09:57.349 18:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:57.349 18:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:57.614 [2024-05-15 18:01:49.853939] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:09:57.614 [2024-05-15 18:01:49.854120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66646 ] 00:09:57.614 [2024-05-15 18:01:50.030961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.872 [2024-05-15 18:01:50.288179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.805 18:01:51 blockdev_nvme_gpt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:58.805 18:01:51 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # return 0 00:09:58.805 18:01:51 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:09:58.805 18:01:51 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:09:58.805 18:01:51 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:59.062 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:59.321 Waiting for block devices as requested 00:09:59.321 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.321 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.321 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.579 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:04.861 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:04.861 18:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:04.861 18:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:10:04.861 18:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:10:04.861 18:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1666 -- # local nvme bdf 00:10:04.861 18:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:10:04.861 18:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:10:04.861 18:01:57 
blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:10:04.861 BYT; 00:10:04.861 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:04.861 18:01:57 blockdev_nvme_gpt -- 
bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:10:04.861 BYT; 00:10:04.861 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:04.861 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:04.861 18:01:57 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:04.862 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:04.862 18:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 
1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:10:05.799 The operation has completed successfully. 00:10:05.799 18:01:58 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:10:06.734 The operation has completed successfully. 00:10:06.734 18:01:59 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:07.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:07.897 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:07.897 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:07.897 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:07.897 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:08.156 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:10:08.156 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.156 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:08.156 [] 00:10:08.156 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.157 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:10:08.157 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:10:08.157 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:08.157 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:08.157 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:08.157 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.157 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:08.415 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.416 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.416 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:10:08.416 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.416 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 
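[The zoned-device guard and the GPT GUID lookup traced above boil down to two small helpers. A bash sketch reconstructed from the xtrace lines — names follow the trace; the literal sources are common/autotest_common.sh and scripts/common.sh, which may differ in detail:]

is_block_zoned() {
    local device=$1
    # A device with no zoned attribute, or one reporting "none", is conventional
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(</sys/block/$device/queue/zoned) != none ]]
}

get_spdk_gpt() {
    local spdk_guid
    local gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    [[ -e $gpt_h ]] || return 1
    # gpt.h defines SPDK_GPT_PART_TYPE_GUID(0x6527994e, 0x2c5a, ...); splitting
    # the grep hit on parentheses captures just the argument list
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
    spdk_guid=${spdk_guid//, /-}  # -> 0x6527994e-0x2c5a-...  (first value in the trace)
    spdk_guid=${spdk_guid//0x/}   # -> 6527994e-2c5a-...      (second value in the trace)
    echo "$spdk_guid"
}

[The two GUIDs are then stamped onto the freshly created partitions with sgdisk -t/-u, which is why the GPT bdev module claims them as Nvme0n1p1/Nvme0n1p2 later in the run even though the kernel names the disk nvme1n1.]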
00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.416 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.416 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:10:08.416 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:10:08.416 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.416 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:08.675 18:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.675 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:10:08.675 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:10:08.676 18:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "259e022c-35d9-4c06-9e3c-09c559293578"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "259e022c-35d9-4c06-9e3c-09c559293578",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9e54e528-7a31-4f9f-b855-5e4aaaf1d763"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9e54e528-7a31-4f9f-b855-5e4aaaf1d763",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "116b54a9-f634-4f8b-b07f-1b8d137331bc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "116b54a9-f634-4f8b-b07f-1b8d137331bc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "587b90fb-9d84-42ec-91f3-ac58aa9d9a6a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "587b90fb-9d84-42ec-91f3-ac58aa9d9a6a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "886f4444-735a-4dba-85d6-b69ecf1e6ab3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "886f4444-735a-4dba-85d6-b69ecf1e6ab3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:08.676 18:02:01 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:10:08.676 18:02:01 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:10:08.676 18:02:01 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:10:08.676 18:02:01 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 66646 00:10:08.676 18:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@946 -- # '[' -z 66646 ']' 00:10:08.676 18:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # kill -0 66646 00:10:08.676 18:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # uname 00:10:08.676 18:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:08.676 18:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66646 00:10:08.676 
18:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:08.676 killing process with pid 66646 00:10:08.676 18:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:08.676 18:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66646' 00:10:08.676 18:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@965 -- # kill 66646 00:10:08.676 18:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # wait 66646 00:10:11.268 18:02:03 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:11.268 18:02:03 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:11.268 18:02:03 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:10:11.268 18:02:03 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:11.268 18:02:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:11.268 ************************************ 00:10:11.268 START TEST bdev_hello_world 00:10:11.268 ************************************ 00:10:11.268 18:02:03 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:11.268 [2024-05-15 18:02:03.258998] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:10:11.268 [2024-05-15 18:02:03.259226] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67279 ] 00:10:11.268 [2024-05-15 18:02:03.431942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.268 [2024-05-15 18:02:03.683306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.203 [2024-05-15 18:02:04.354838] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:12.203 [2024-05-15 18:02:04.354912] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:10:12.203 [2024-05-15 18:02:04.354955] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:12.203 [2024-05-15 18:02:04.358033] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:12.203 [2024-05-15 18:02:04.358702] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:12.204 [2024-05-15 18:02:04.358746] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:12.204 [2024-05-15 18:02:04.358993] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
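[The killprocess call above (pid 66646, the app started for GPT setup) expands to a guarded kill-and-wait. A sketch of the flow visible in the trace; the sudo branch is handled specially in the real helper and only stubbed here:]

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1            # refuse an empty pid
    kill -0 "$pid" || return 0           # already gone, nothing to do
    local process_name=
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
    fi
    if [ "$process_name" = sudo ]; then
        return 1   # stub: the real helper signals the sudo child instead
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}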
00:10:12.204 00:10:12.204 [2024-05-15 18:02:04.359033] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:13.139 00:10:13.139 real 0m2.398s 00:10:13.139 user 0m1.975s 00:10:13.139 sys 0m0.309s 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:13.139 ************************************ 00:10:13.139 END TEST bdev_hello_world 00:10:13.139 ************************************ 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:13.139 18:02:05 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:10:13.139 18:02:05 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:13.139 18:02:05 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:13.139 18:02:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:13.139 ************************************ 00:10:13.139 START TEST bdev_bounds 00:10:13.139 ************************************ 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=67321 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:13.139 Process bdevio pid: 67321 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 67321' 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 67321 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 67321 ']' 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:13.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:13.139 18:02:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:13.399 [2024-05-15 18:02:05.696246] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
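[run_test itself is just a banner-and-timer wrapper around the test function, which is where the START/END TEST banners and the real/user/sys lines above come from. A rough sketch — banner width and bookkeeping are simplified, and the real wrapper also feeds the end-of-run summary:]

run_test() {
    local test_name=$1
    shift
    [ $# -ge 1 ] || return 1   # trace: '[' 7 -le 1 ']' guards an empty command
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}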
00:10:13.399 [2024-05-15 18:02:05.696437] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67321 ] 00:10:13.399 [2024-05-15 18:02:05.862083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:13.720 [2024-05-15 18:02:06.089860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.720 [2024-05-15 18:02:06.089997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.720 [2024-05-15 18:02:06.090021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.289 18:02:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:14.289 18:02:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:10:14.289 18:02:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:14.547 I/O targets: 00:10:14.547 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:10:14.547 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:10:14.547 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:14.547 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:14.547 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:14.547 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:14.547 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:14.547 00:10:14.547 00:10:14.547 CUnit - A unit testing framework for C - Version 2.1-3 00:10:14.547 http://cunit.sourceforge.net/ 00:10:14.547 00:10:14.547 00:10:14.547 Suite: bdevio tests on: Nvme3n1 00:10:14.547 Test: blockdev write read block ...passed 00:10:14.547 Test: blockdev write zeroes read block ...passed 00:10:14.547 Test: blockdev write zeroes read no split ...passed 00:10:14.547 Test: blockdev write zeroes read split ...passed 00:10:14.547 Test: blockdev write zeroes read split partial ...passed 00:10:14.547 Test: blockdev reset ...[2024-05-15 18:02:06.924587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:10:14.547 [2024-05-15 18:02:06.928341] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
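[bdev_bounds drives the bdevio app end to end: launch it against the job's bdev JSON, wait for its RPC socket, fire the CUnit suites through tests.py, then tear it down. A standalone approximation of the sequence traced above — the socket poll is a stand-in for the real waitforlisten, which is more careful:]

spdk=/home/vagrant/spdk_repo/spdk
"$spdk/test/bdev/bdevio/bdevio" -w -s 0 --json "$spdk/test/bdev/bdev.json" &
bdevio_pid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten
"$spdk/test/bdev/bdevio/tests.py" perform_tests
kill "$bdevio_pid" && wait "$bdevio_pid"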
00:10:14.547 passed 00:10:14.547 Test: blockdev write read 8 blocks ...passed 00:10:14.547 Test: blockdev write read size > 128k ...passed 00:10:14.547 Test: blockdev write read invalid size ...passed 00:10:14.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:14.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:14.547 Test: blockdev write read max offset ...passed 00:10:14.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:14.547 Test: blockdev writev readv 8 blocks ...passed 00:10:14.547 Test: blockdev writev readv 30 x 1block ...passed 00:10:14.547 Test: blockdev writev readv block ...passed 00:10:14.547 Test: blockdev writev readv size > 128k ...passed 00:10:14.547 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:14.547 Test: blockdev comparev and writev ...[2024-05-15 18:02:06.936899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x289c0a000 len:0x1000 00:10:14.547 [2024-05-15 18:02:06.936967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:14.547 passed 00:10:14.547 Test: blockdev nvme passthru rw ...passed 00:10:14.547 Test: blockdev nvme passthru vendor specific ...[2024-05-15 18:02:06.937793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:14.547 [2024-05-15 18:02:06.937838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:14.547 passed 00:10:14.547 Test: blockdev nvme admin passthru ...passed 00:10:14.547 Test: blockdev copy ...passed 00:10:14.547 Suite: bdevio tests on: Nvme2n3 00:10:14.547 Test: blockdev write read block ...passed 00:10:14.547 Test: blockdev write zeroes read block ...passed 00:10:14.547 Test: blockdev write zeroes read no split ...passed 00:10:14.547 Test: blockdev write zeroes read split ...passed 00:10:14.547 Test: blockdev write zeroes read split partial ...passed 00:10:14.547 Test: blockdev reset ...[2024-05-15 18:02:07.007211] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:14.547 [2024-05-15 18:02:07.011395] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:14.547 passed 00:10:14.547 Test: blockdev write read 8 blocks ...passed 00:10:14.547 Test: blockdev write read size > 128k ...passed 00:10:14.547 Test: blockdev write read invalid size ...passed 00:10:14.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:14.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:14.547 Test: blockdev write read max offset ...passed 00:10:14.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:14.547 Test: blockdev writev readv 8 blocks ...passed 00:10:14.547 Test: blockdev writev readv 30 x 1block ...passed 00:10:14.547 Test: blockdev writev readv block ...passed 00:10:14.547 Test: blockdev writev readv size > 128k ...passed 00:10:14.547 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:14.547 Test: blockdev comparev and writev ...[2024-05-15 18:02:07.019042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x268f04000 len:0x1000 00:10:14.547 [2024-05-15 18:02:07.019099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:14.547 passed 00:10:14.547 Test: blockdev nvme passthru rw ...passed 00:10:14.547 Test: blockdev nvme passthru vendor specific ...passed 00:10:14.547 Test: blockdev nvme admin passthru ...[2024-05-15 18:02:07.019925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:14.547 [2024-05-15 18:02:07.019967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:14.547 passed 00:10:14.547 Test: blockdev copy ...passed 00:10:14.547 Suite: bdevio tests on: Nvme2n2 00:10:14.547 Test: blockdev write read block ...passed 00:10:14.547 Test: blockdev write zeroes read block ...passed 00:10:14.547 Test: blockdev write zeroes read no split ...passed 00:10:14.806 Test: blockdev write zeroes read split ...passed 00:10:14.806 Test: blockdev write zeroes read split partial ...passed 00:10:14.806 Test: blockdev reset ...[2024-05-15 18:02:07.086762] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:14.806 [2024-05-15 18:02:07.090606] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:14.806 passed 00:10:14.806 Test: blockdev write read 8 blocks ...passed 00:10:14.806 Test: blockdev write read size > 128k ...passed 00:10:14.806 Test: blockdev write read invalid size ...passed 00:10:14.806 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:14.806 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:14.806 Test: blockdev write read max offset ...passed 00:10:14.806 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:14.806 Test: blockdev writev readv 8 blocks ...passed 00:10:14.806 Test: blockdev writev readv 30 x 1block ...passed 00:10:14.806 Test: blockdev writev readv block ...passed 00:10:14.806 Test: blockdev writev readv size > 128k ...passed 00:10:14.806 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:14.806 Test: blockdev comparev and writev ...[2024-05-15 18:02:07.098361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x268f04000 len:0x1000 00:10:14.806 [2024-05-15 18:02:07.098431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:14.806 passed 00:10:14.806 Test: blockdev nvme passthru rw ...passed 00:10:14.806 Test: blockdev nvme passthru vendor specific ...[2024-05-15 18:02:07.099232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:14.806 [2024-05-15 18:02:07.099272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:14.806 passed 00:10:14.806 Test: blockdev nvme admin passthru ...passed 00:10:14.806 Test: blockdev copy ...passed 00:10:14.806 Suite: bdevio tests on: Nvme2n1 00:10:14.806 Test: blockdev write read block ...passed 00:10:14.806 Test: blockdev write zeroes read block ...passed 00:10:14.806 Test: blockdev write zeroes read no split ...passed 00:10:14.806 Test: blockdev write zeroes read split ...passed 00:10:14.806 Test: blockdev write zeroes read split partial ...passed 00:10:14.806 Test: blockdev reset ...[2024-05-15 18:02:07.163145] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:14.806 [2024-05-15 18:02:07.166921] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:14.806 passed 00:10:14.806 Test: blockdev write read 8 blocks ...passed 00:10:14.806 Test: blockdev write read size > 128k ...passed 00:10:14.806 Test: blockdev write read invalid size ...passed 00:10:14.806 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:14.806 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:14.806 Test: blockdev write read max offset ...passed 00:10:14.806 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:14.806 Test: blockdev writev readv 8 blocks ...passed 00:10:14.806 Test: blockdev writev readv 30 x 1block ...passed 00:10:14.806 Test: blockdev writev readv block ...passed 00:10:14.806 Test: blockdev writev readv size > 128k ...passed 00:10:14.806 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:14.806 Test: blockdev comparev and writev ...[2024-05-15 18:02:07.174707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x298a3c000 len:0x1000 00:10:14.806 [2024-05-15 18:02:07.174764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:14.806 passed 00:10:14.806 Test: blockdev nvme passthru rw ...passed 00:10:14.806 Test: blockdev nvme passthru vendor specific ...passed 00:10:14.806 Test: blockdev nvme admin passthru ...[2024-05-15 18:02:07.175586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:14.806 [2024-05-15 18:02:07.175627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:14.806 passed 00:10:14.806 Test: blockdev copy ...passed 00:10:14.806 Suite: bdevio tests on: Nvme1n1 00:10:14.806 Test: blockdev write read block ...passed 00:10:14.806 Test: blockdev write zeroes read block ...passed 00:10:14.806 Test: blockdev write zeroes read no split ...passed 00:10:14.806 Test: blockdev write zeroes read split ...passed 00:10:14.806 Test: blockdev write zeroes read split partial ...passed 00:10:14.806 Test: blockdev reset ...[2024-05-15 18:02:07.241602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:10:14.806 [2024-05-15 18:02:07.245276] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:14.806 passed 00:10:14.806 Test: blockdev write read 8 blocks ...passed 00:10:14.806 Test: blockdev write read size > 128k ...passed 00:10:14.806 Test: blockdev write read invalid size ...passed 00:10:14.806 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:14.806 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:14.806 Test: blockdev write read max offset ...passed 00:10:14.806 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:14.806 Test: blockdev writev readv 8 blocks ...passed 00:10:14.806 Test: blockdev writev readv 30 x 1block ...passed 00:10:14.806 Test: blockdev writev readv block ...passed 00:10:14.806 Test: blockdev writev readv size > 128k ...passed 00:10:14.806 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:14.806 Test: blockdev comparev and writev ...[2024-05-15 18:02:07.253431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x298a38000 len:0x1000 00:10:14.806 [2024-05-15 18:02:07.253498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:14.806 passed 00:10:14.806 Test: blockdev nvme passthru rw ...passed 00:10:14.806 Test: blockdev nvme passthru vendor specific ...[2024-05-15 18:02:07.254467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:14.806 [2024-05-15 18:02:07.254509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:14.806 passed 00:10:14.806 Test: blockdev nvme admin passthru ...passed 00:10:14.806 Test: blockdev copy ...passed 00:10:14.806 Suite: bdevio tests on: Nvme0n1p2 00:10:14.806 Test: blockdev write read block ...passed 00:10:14.806 Test: blockdev write zeroes read block ...passed 00:10:14.806 Test: blockdev write zeroes read no split ...passed 00:10:14.806 Test: blockdev write zeroes read split ...passed 00:10:15.065 Test: blockdev write zeroes read split partial ...passed 00:10:15.065 Test: blockdev reset ...[2024-05-15 18:02:07.320430] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:15.065 [2024-05-15 18:02:07.324022] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:15.065 passed 00:10:15.065 Test: blockdev write read 8 blocks ...passed 00:10:15.065 Test: blockdev write read size > 128k ...passed 00:10:15.065 Test: blockdev write read invalid size ...passed 00:10:15.065 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.065 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.065 Test: blockdev write read max offset ...passed 00:10:15.065 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:15.065 Test: blockdev writev readv 8 blocks ...passed 00:10:15.065 Test: blockdev writev readv 30 x 1block ...passed 00:10:15.065 Test: blockdev writev readv block ...passed 00:10:15.065 Test: blockdev writev readv size > 128k ...passed 00:10:15.065 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:15.065 Test: blockdev comparev and writev ...[2024-05-15 18:02:07.331093] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:10:15.065 separate metadata which is not supported yet. 
00:10:15.065 passed 00:10:15.065 Test: blockdev nvme passthru rw ...passed 00:10:15.065 Test: blockdev nvme passthru vendor specific ...passed 00:10:15.065 Test: blockdev nvme admin passthru ...passed 00:10:15.065 Test: blockdev copy ...passed 00:10:15.065 Suite: bdevio tests on: Nvme0n1p1 00:10:15.065 Test: blockdev write read block ...passed 00:10:15.065 Test: blockdev write zeroes read block ...passed 00:10:15.065 Test: blockdev write zeroes read no split ...passed 00:10:15.065 Test: blockdev write zeroes read split ...passed 00:10:15.065 Test: blockdev write zeroes read split partial ...passed 00:10:15.065 Test: blockdev reset ...[2024-05-15 18:02:07.384574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:15.065 [2024-05-15 18:02:07.388155] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:15.065 passed 00:10:15.065 Test: blockdev write read 8 blocks ...passed 00:10:15.065 Test: blockdev write read size > 128k ...passed 00:10:15.065 Test: blockdev write read invalid size ...passed 00:10:15.065 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.065 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.065 Test: blockdev write read max offset ...passed 00:10:15.065 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:15.065 Test: blockdev writev readv 8 blocks ...passed 00:10:15.065 Test: blockdev writev readv 30 x 1block ...passed 00:10:15.065 Test: blockdev writev readv block ...passed 00:10:15.065 Test: blockdev writev readv size > 128k ...passed 00:10:15.065 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:15.065 Test: blockdev comparev and writev ...passed 00:10:15.065 Test: blockdev nvme passthru rw ...passed 00:10:15.065 Test: blockdev nvme passthru vendor specific ...passed 00:10:15.065 Test: blockdev nvme admin passthru ...passed 00:10:15.065 Test: blockdev copy ...[2024-05-15 18:02:07.395128] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:10:15.065 separate metadata which is not supported yet. 
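[Both GPT bdevs get comparev_and_writev skipped because they carry 64 bytes of separate metadata per block — see "md_size": 64 in the bdev dump earlier, versus no md_size on the plain namespaces. One way to check that up front; rpc.py ships with the repo, the jq filter is our addition:]

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1p1 | jq '.[0].md_size'
# -> 64, so bdevio skips comparev_and_writev for this bdev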
00:10:15.065 passed 00:10:15.065 00:10:15.065 Run Summary: Type Total Ran Passed Failed Inactive 00:10:15.065 suites 7 7 n/a 0 0 00:10:15.065 tests 161 161 161 0 0 00:10:15.065 asserts 1006 1006 1006 0 n/a 00:10:15.065 00:10:15.065 Elapsed time = 1.449 seconds 00:10:15.065 0 00:10:15.065 18:02:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 67321 00:10:15.065 18:02:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 67321 ']' 00:10:15.065 18:02:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 67321 00:10:15.065 18:02:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:10:15.065 18:02:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:15.065 18:02:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67321 00:10:15.065 18:02:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:15.065 18:02:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:15.065 killing process with pid 67321 00:10:15.065 18:02:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67321' 00:10:15.065 18:02:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@965 -- # kill 67321 00:10:15.065 18:02:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # wait 67321 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:10:16.002 00:10:16.002 real 0m2.817s 00:10:16.002 user 0m6.877s 00:10:16.002 sys 0m0.446s 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:16.002 ************************************ 00:10:16.002 END TEST bdev_bounds 00:10:16.002 ************************************ 00:10:16.002 18:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:16.002 18:02:08 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:16.002 18:02:08 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:16.002 18:02:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:16.002 ************************************ 00:10:16.002 START TEST bdev_nbd 00:10:16.002 ************************************ 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd 
-- bdev/blockdev.sh@304 -- # local bdev_all 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=67386 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 67386 /var/tmp/spdk-nbd.sock 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 67386 ']' 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:16.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:16.002 18:02:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:16.261 [2024-05-15 18:02:08.585683] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
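[Each nbd_start_disk below is followed by waitfornbd, which the trace expands into a bounded /proc/partitions poll plus a one-block direct-I/O read to prove the device answers. Reconstructed sketch — the sleep between retries is an assumption; the dd/stat size check is verbatim from the trace:]

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do      # trace: (( i <= 20 ))
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                        # assumed; the retry delay is not visible
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    # Pull one 4k block through the device and make sure we actually got data
    local testfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    dd if=/dev/$nbd_name of="$testfile" bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s "$testfile")
    rm -f "$testfile"
    [ "$size" != 0 ]
}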
00:10:16.261 [2024-05-15 18:02:08.585848] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.519 [2024-05-15 18:02:08.762833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.520 [2024-05-15 18:02:08.993184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 
-- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:17.501 1+0 records in 00:10:17.501 1+0 records out 00:10:17.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059452 s, 6.9 MB/s 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:17.501 18:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:18.069 1+0 records in 00:10:18.069 1+0 records out 00:10:18.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605064 s, 6.8 MB/s 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:18.069 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:18.328 18:02:10 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:18.328 1+0 records in 00:10:18.328 1+0 records out 00:10:18.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611407 s, 6.7 MB/s 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:18.328 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:18.595 1+0 records in 00:10:18.595 1+0 records out 00:10:18.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569383 s, 7.2 MB/s 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:18.595 18:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:18.858 1+0 records in 00:10:18.858 1+0 records out 00:10:18.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488898 s, 8.4 MB/s 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:18.858 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd 
-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:19.117 1+0 records in 00:10:19.117 1+0 records out 00:10:19.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000948844 s, 4.3 MB/s 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:19.117 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:19.376 1+0 records in 00:10:19.376 1+0 records out 00:10:19.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692218 s, 5.9 MB/s 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:19.376 18:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:19.636 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd0", 00:10:19.636 "bdev_name": "Nvme0n1p1" 00:10:19.636 }, 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd1", 00:10:19.636 "bdev_name": "Nvme0n1p2" 00:10:19.636 }, 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd2", 00:10:19.636 "bdev_name": "Nvme1n1" 00:10:19.636 }, 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd3", 00:10:19.636 "bdev_name": "Nvme2n1" 00:10:19.636 }, 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd4", 00:10:19.636 "bdev_name": "Nvme2n2" 00:10:19.636 }, 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd5", 00:10:19.636 "bdev_name": "Nvme2n3" 00:10:19.636 }, 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd6", 00:10:19.636 "bdev_name": "Nvme3n1" 00:10:19.636 } 00:10:19.636 ]' 00:10:19.636 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:19.636 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd0", 00:10:19.636 "bdev_name": "Nvme0n1p1" 00:10:19.636 }, 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd1", 00:10:19.636 "bdev_name": "Nvme0n1p2" 00:10:19.636 }, 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd2", 00:10:19.636 "bdev_name": "Nvme1n1" 00:10:19.636 }, 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd3", 00:10:19.636 "bdev_name": "Nvme2n1" 00:10:19.636 }, 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd4", 00:10:19.636 "bdev_name": "Nvme2n2" 00:10:19.636 }, 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd5", 00:10:19.636 "bdev_name": "Nvme2n3" 00:10:19.636 }, 00:10:19.636 { 00:10:19.636 "nbd_device": "/dev/nbd6", 00:10:19.636 "bdev_name": "Nvme3n1" 00:10:19.636 } 00:10:19.636 ]' 00:10:19.636 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:19.636 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:19.636 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:19.636 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' 
'/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:19.636 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:19.636 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:19.636 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:19.636 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:20.204 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:20.204 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:20.204 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:20.204 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:20.204 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:20.204 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:20.204 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:20.204 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:20.204 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:20.204 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:20.463 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:20.463 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:20.463 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:20.463 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:20.463 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:20.463 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:20.463 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:20.463 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:20.463 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:20.463 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:20.727 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:20.727 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:20.727 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:20.727 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:20.727 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:20.727 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:20.727 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:20.727 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:20.727 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:20.727 18:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:20.993 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:20.993 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:20.993 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:20.993 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:20.993 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:20.993 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:20.993 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:20.993 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:20.993 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:20.993 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:21.252 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:21.252 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:21.252 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:21.252 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.252 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.252 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:21.252 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:21.252 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.252 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:21.252 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:21.510 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:21.510 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:21.510 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:21.510 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.510 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.510 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:21.510 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:21.510 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.511 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:21.511 18:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:21.769 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:21.769 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:21.770 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:10:21.770 18:02:14 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.770 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.770 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:21.770 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:21.770 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.770 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:21.770 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.770 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:21.770 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:21.770 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:21.770 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:22.028 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:22.028 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:22.028 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:22.028 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:22.028 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:22.028 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:22.028 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:22.028 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:22.028 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:22.028 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:22.029 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:10:22.288 /dev/nbd0 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:22.288 1+0 records in 00:10:22.288 1+0 records out 00:10:22.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597741 s, 6.9 MB/s 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:22.288 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:10:22.547 /dev/nbd1 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:22.547 1+0 records in 00:10:22.547 1+0 records out 00:10:22.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676013 s, 6.1 MB/s 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:22.547 18:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:10:22.807 /dev/nbd10 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:22.807 1+0 records in 00:10:22.807 1+0 records out 00:10:22.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567402 s, 7.2 MB/s 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 
'!=' 0 ']' 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:22.807 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:23.065 /dev/nbd11 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.065 1+0 records in 00:10:23.065 1+0 records out 00:10:23.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000792525 s, 5.2 MB/s 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:23.065 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:23.324 /dev/nbd12 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:10:23.324 18:02:15 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.324 1+0 records in 00:10:23.324 1+0 records out 00:10:23.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678739 s, 6.0 MB/s 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:23.324 18:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:23.583 /dev/nbd13 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.583 1+0 records in 00:10:23.583 1+0 records out 00:10:23.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000840827 s, 4.9 MB/s 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:23.583 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:23.842 /dev/nbd14 00:10:23.842 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:23.842 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:23.842 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:10:23.842 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:10:23.842 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:23.842 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:23.842 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:24.100 1+0 records in 00:10:24.100 1+0 records out 00:10:24.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662783 s, 6.2 MB/s 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:24.100 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:24.101 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:24.101 { 00:10:24.101 "nbd_device": "/dev/nbd0", 00:10:24.101 "bdev_name": "Nvme0n1p1" 00:10:24.101 }, 00:10:24.101 { 00:10:24.101 "nbd_device": "/dev/nbd1", 00:10:24.101 "bdev_name": "Nvme0n1p2" 00:10:24.101 }, 00:10:24.101 { 00:10:24.101 "nbd_device": "/dev/nbd10", 00:10:24.101 "bdev_name": "Nvme1n1" 00:10:24.101 }, 00:10:24.101 { 00:10:24.101 "nbd_device": "/dev/nbd11", 00:10:24.101 "bdev_name": "Nvme2n1" 00:10:24.101 }, 00:10:24.101 { 00:10:24.101 "nbd_device": "/dev/nbd12", 00:10:24.101 "bdev_name": "Nvme2n2" 00:10:24.101 }, 00:10:24.101 { 00:10:24.101 "nbd_device": "/dev/nbd13", 00:10:24.101 "bdev_name": "Nvme2n3" 00:10:24.101 }, 00:10:24.101 { 
00:10:24.101 "nbd_device": "/dev/nbd14", 00:10:24.101 "bdev_name": "Nvme3n1" 00:10:24.101 } 00:10:24.101 ]' 00:10:24.101 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:24.101 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:24.101 { 00:10:24.101 "nbd_device": "/dev/nbd0", 00:10:24.101 "bdev_name": "Nvme0n1p1" 00:10:24.101 }, 00:10:24.101 { 00:10:24.101 "nbd_device": "/dev/nbd1", 00:10:24.101 "bdev_name": "Nvme0n1p2" 00:10:24.101 }, 00:10:24.101 { 00:10:24.101 "nbd_device": "/dev/nbd10", 00:10:24.101 "bdev_name": "Nvme1n1" 00:10:24.101 }, 00:10:24.101 { 00:10:24.101 "nbd_device": "/dev/nbd11", 00:10:24.101 "bdev_name": "Nvme2n1" 00:10:24.101 }, 00:10:24.101 { 00:10:24.101 "nbd_device": "/dev/nbd12", 00:10:24.101 "bdev_name": "Nvme2n2" 00:10:24.101 }, 00:10:24.101 { 00:10:24.101 "nbd_device": "/dev/nbd13", 00:10:24.101 "bdev_name": "Nvme2n3" 00:10:24.101 }, 00:10:24.101 { 00:10:24.101 "nbd_device": "/dev/nbd14", 00:10:24.101 "bdev_name": "Nvme3n1" 00:10:24.101 } 00:10:24.101 ]' 00:10:24.359 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:24.359 /dev/nbd1 00:10:24.359 /dev/nbd10 00:10:24.359 /dev/nbd11 00:10:24.359 /dev/nbd12 00:10:24.359 /dev/nbd13 00:10:24.359 /dev/nbd14' 00:10:24.359 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:24.359 /dev/nbd1 00:10:24.359 /dev/nbd10 00:10:24.359 /dev/nbd11 00:10:24.359 /dev/nbd12 00:10:24.359 /dev/nbd13 00:10:24.359 /dev/nbd14' 00:10:24.359 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:24.360 256+0 records in 00:10:24.360 256+0 records out 00:10:24.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00661592 s, 158 MB/s 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:24.360 256+0 records in 00:10:24.360 256+0 records out 00:10:24.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15275 s, 6.9 MB/s 00:10:24.360 
18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:24.360 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:24.618 256+0 records in 00:10:24.618 256+0 records out 00:10:24.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187936 s, 5.6 MB/s 00:10:24.618 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:24.618 18:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:24.890 256+0 records in 00:10:24.890 256+0 records out 00:10:24.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185554 s, 5.7 MB/s 00:10:24.890 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:24.890 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:24.890 256+0 records in 00:10:24.890 256+0 records out 00:10:24.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167534 s, 6.3 MB/s 00:10:24.890 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:24.890 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:25.168 256+0 records in 00:10:25.168 256+0 records out 00:10:25.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178285 s, 5.9 MB/s 00:10:25.168 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:25.168 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:25.429 256+0 records in 00:10:25.429 256+0 records out 00:10:25.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151521 s, 6.9 MB/s 00:10:25.429 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:25.429 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:25.429 256+0 records in 00:10:25.429 256+0 records out 00:10:25.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168593 s, 6.2 MB/s 00:10:25.429 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:25.429 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:25.429 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:25.429 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:25.430 18:02:17 
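[Editor's note] With the pattern written everywhere, the verify half follows below: each device is compared byte-for-byte against the same scratch file, and because these suites run under errexit (set -e) a single differing byte fails the test on the spot. The cmp records that follow reduce to:

    # Verify phase of nbd_dd_data_verify (sketch; tmp_file and nbd_list as in the write phase above)
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"   # compare the first 1 MiB; -b prints any differing bytes
    done
    rm "$tmp_file"                        # the scratch file is only needed for the comparison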
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:25.430 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:25.688 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:25.688 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:25.688 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:25.688 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:25.688 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:25.688 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:25.688 18:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:25.946 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:25.946 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:25.946 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:25.946 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:25.946 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:25.946 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:25.946 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:25.946 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:25.946 
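[Editor's note] Detaching is symmetric to attaching: nbd_stop_disks walks the device list, asks the target over the RPC socket to drop each export, then polls until the kernel has really removed the node (waitfornbd_exit, sketched after the next few records). In outline, assuming the same nbd_list as above:

    # Sketch of the nbd_stop_disks loop traced above
    rpc_sock=/var/tmp/spdk-nbd.sock
    for dev in "${nbd_list[@]}"; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" nbd_stop_disk "$dev"
        waitfornbd_exit "$(basename "$dev")"   # block until /dev/nbdX is gone from /proc/partitions
    done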
18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:25.946 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:26.204 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:26.204 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:26.205 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:26.205 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:26.205 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:26.205 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:26.205 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:26.205 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:26.205 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:26.205 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:26.462 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:26.462 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:26.462 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:26.462 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:26.462 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:26.462 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:26.462 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:26.462 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:26.462 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:26.462 18:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:26.721 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:26.721 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:26.721 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:26.721 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:26.721 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:26.721 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:26.721 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:26.721 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:26.721 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:26.721 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:26.979 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:26.979 18:02:19 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:26.979 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:26.979 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:26.979 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:26.979 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:26.979 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:26.979 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:26.979 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:26.979 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:27.235 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:27.235 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:27.235 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:27.235 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:27.235 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:27.235 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:27.235 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:27.235 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:27.235 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:27.235 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:27.493 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:27.493 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:27.493 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:27.493 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:27.493 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:27.493 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:27.493 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:27.493 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:27.493 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:27.493 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.493 18:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:27.750 
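[Editor's note] Every attach and detach in this test leans on two polling helpers whose bodies can be read back out of the xtrace: waitfornbd (after nbd_start_disk) waits up to roughly 2 s for the name to appear in /proc/partitions and then proves the device is readable with a single 4 KiB O_DIRECT read, while waitfornbd_exit (after nbd_stop_disk) waits for the name to disappear again. A sketch reconstructed from the trace; the real helpers live in the SPDK test harness, and the scratch path is the one this run used:

    waitfornbd() {
        local nbd_name=$1 i size
        local scratch=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # scratch path as seen in this log
        for ((i = 1; i <= 20; i++)); do                    # wait for the kernel to publish the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do                    # then prove a direct read works
            if dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$scratch")
                rm -f "$scratch"
                [ "$size" != "0" ] && return 0             # got real data back: the device is usable
            fi
            sleep 0.1
        done
        return 1
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do                    # done once /proc/partitions drops the name
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }

The empty nbd_get_disks output ('[]') and the grep -c count of 0 just below confirm that all seven exports were torn down.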
18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:10:27.750 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:28.008 malloc_lvol_verify 00:10:28.008 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:28.266 874d9007-075c-4a1e-9fb8-6223665d541b 00:10:28.266 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:28.562 6e223f66-2bef-40e7-baa5-dcf3e639109b 00:10:28.562 18:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:28.821 /dev/nbd0 00:10:28.821 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:10:28.821 mke2fs 1.46.5 (30-Dec-2021) 00:10:28.821 Discarding device blocks: 0/4096 done 00:10:28.821 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:28.821 00:10:28.821 Allocating group tables: 0/1 done 00:10:28.821 Writing inode tables: 0/1 done 00:10:28.821 Creating journal (1024 blocks): done 00:10:28.821 Writing superblocks and filesystem accounting information: 0/1 done 00:10:28.821 00:10:28.821 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:10:28.821 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:28.821 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.821 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:28.821 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:28.821 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:28.821 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.821 
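[Editor's note] nbd_with_lvol_verify, traced above, exercises the NBD path end to end with a real filesystem: a malloc bdev, an lvolstore, and a 4 MiB lvol are stacked up, the lvol is exported as /dev/nbd0, and mkfs.ext4 completing on it is the pass condition (the two UUIDs in the log are per-run values). Its teardown continues below; the RPC sequence boils down to the following, where rpc() is a local convenience wrapper rather than a harness function:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev with 512 B blocks
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvolstore "lvs" on top of it
    rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume "lvs/lvol"
    rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as a kernel block device
    mkfs.ext4 /dev/nbd0                                   # pass condition: mkfs runs to completion
    rpc nbd_stop_disk /dev/nbd0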
18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 67386 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 67386 ']' 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 67386 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67386 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67386' 00:10:29.079 killing process with pid 67386 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@965 -- # kill 67386 00:10:29.079 18:02:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # wait 67386 00:10:30.455 18:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:10:30.455 00:10:30.455 real 0m14.200s 00:10:30.455 user 0m19.986s 00:10:30.455 sys 0m4.624s 00:10:30.455 18:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:30.455 18:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:30.455 ************************************ 00:10:30.455 END TEST bdev_nbd 00:10:30.455 ************************************ 00:10:30.455 18:02:22 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:10:30.455 18:02:22 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:10:30.455 18:02:22 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:10:30.455 skipping fio tests on NVMe due to multi-ns failures. 00:10:30.455 18:02:22 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
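[Editor's note] The bdev_nbd stage ends by reclaiming the SPDK target it started: killprocess 67386 checks that the pid is still alive, inspects the process name (reactor_0 here) so it does not signal a sudo wrapper by mistake, then kills and reaps it so the RPC socket is free for the next stage. A rough reconstruction from the trace; the branch the real helper takes when the name is sudo is elided here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                                     # fails (and aborts under set -e) if already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1         # the real helper treats a sudo wrapper specially
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                        # reap the child so its socket is released
    }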
00:10:30.455 18:02:22 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT
00:10:30.455 18:02:22 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:30.455 18:02:22 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']'
00:10:30.455 18:02:22 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:30.455 18:02:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:30.455 ************************************
00:10:30.455 START TEST bdev_verify
00:10:30.455 ************************************
00:10:30.455 18:02:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:30.455 [2024-05-15 18:02:22.828102] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization...
00:10:30.712 [2024-05-15 18:02:22.828316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67831 ]
00:10:30.969 [2024-05-15 18:02:23.002047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:30.969 [2024-05-15 18:02:23.239330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:30.969 [2024-05-15 18:02:23.239334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:31.589 Running I/O for 5 seconds...
00:10:36.873
00:10:36.873 Latency(us)
00:10:36.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:36.873 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0x0 length 0x5e800
00:10:36.873 Nvme0n1p1 : 5.07 1376.31 5.38 0.00 0.00 92610.89 9592.09 88652.33
00:10:36.873 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0x5e800 length 0x5e800
00:10:36.873 Nvme0n1p1 : 5.05 1266.61 4.95 0.00 0.00 100822.48 19899.11 103904.35
00:10:36.873 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0x0 length 0x5e7ff
00:10:36.873 Nvme0n1p2 : 5.07 1375.40 5.37 0.00 0.00 92517.71 11439.01 83886.08
00:10:36.873 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:10:36.873 Nvme0n1p2 : 5.05 1266.16 4.95 0.00 0.00 100711.94 19779.96 101044.60
00:10:36.873 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0x0 length 0xa0000
00:10:36.873 Nvme1n1 : 5.07 1374.99 5.37 0.00 0.00 92349.39 11558.17 79119.83
00:10:36.873 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0xa0000 length 0xa0000
00:10:36.873 Nvme1n1 : 5.06 1265.78 4.94 0.00 0.00 100601.15 20137.43 98184.84
00:10:36.873 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0x0 length 0x80000
00:10:36.873 Nvme2n1 : 5.09 1384.35 5.41 0.00 0.00 91826.34 9175.04 78166.57
00:10:36.873 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0x80000 length 0x80000
00:10:36.873 Nvme2n1 : 5.06 1265.37 4.94 0.00 0.00 100481.64 19184.17 97708.22
00:10:36.873 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0x0 length 0x80000
00:10:36.873 Nvme2n2 : 5.09 1383.96 5.41 0.00 0.00 91690.52 9175.04 81979.58
00:10:36.873 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0x80000 length 0x80000
00:10:36.873 Nvme2n2 : 5.06 1264.96 4.94 0.00 0.00 100352.54 19303.33 100091.35
00:10:36.873 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0x0 length 0x80000
00:10:36.873 Nvme2n3 : 5.09 1383.58 5.40 0.00 0.00 91558.67 9294.20 85792.58
00:10:36.873 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0x80000 length 0x80000
00:10:36.873 Nvme2n3 : 5.06 1264.54 4.94 0.00 0.00 100218.12 18350.08 102474.47
00:10:36.873 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0x0 length 0x20000
00:10:36.873 Nvme3n1 : 5.09 1383.19 5.40 0.00 0.00 91447.69 9353.77 88652.33
00:10:36.873 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:36.873 Verification LBA range: start 0x20000 length 0x20000
00:10:36.873 Nvme3n1 : 5.07 1275.31 4.98 0.00 0.00 99287.34 2532.07 103904.35
00:10:36.873 ===================================================================================================================
00:10:36.873 Total : 18530.51 72.38 0.00 0.00 95987.37 2532.07 103904.35
00:10:38.252
00:10:38.252 real 0m7.741s
00:10:38.252 user 0m14.016s
00:10:38.252 sys 0m0.331s
00:10:38.252 18:02:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:38.252 18:02:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:10:38.252 ************************************
00:10:38.252 END TEST bdev_verify
00:10:38.252 ************************************
00:10:38.252 18:02:30 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:38.252 18:02:30 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']'
00:10:38.252 18:02:30 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:38.252 18:02:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:38.252 ************************************
00:10:38.252 START TEST bdev_verify_big_io
00:10:38.252 ************************************
00:10:38.252 18:02:30 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:38.252 [2024-05-15 18:02:30.605929] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization...
00:10:38.252 [2024-05-15 18:02:30.606080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67935 ]
00:10:38.511 [2024-05-15 18:02:30.770101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:38.511 [2024-05-15 18:02:31.010155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:38.511 [2024-05-15 18:02:31.010165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:39.451 Running I/O for 5 seconds...
00:10:46.023
00:10:46.023 Latency(us)
00:10:46.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:46.023 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0x0 length 0x5e80
00:10:46.023 Nvme0n1p1 : 5.77 113.96 7.12 0.00 0.00 1076432.47 26095.24 1182031.13
00:10:46.023 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0x5e80 length 0x5e80
00:10:46.023 Nvme0n1p1 : 5.79 115.99 7.25 0.00 0.00 1068846.24 30265.72 1098145.05
00:10:46.023 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0x0 length 0x5e7f
00:10:46.023 Nvme0n1p2 : 5.81 119.76 7.49 0.00 0.00 1014080.75 42181.35 949437.91
00:10:46.023 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0x5e7f length 0x5e7f
00:10:46.023 Nvme0n1p2 : 5.80 111.63 6.98 0.00 0.00 1078908.38 72923.69 1814989.73
00:10:46.023 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0x0 length 0xa000
00:10:46.023 Nvme1n1 : 5.89 111.52 6.97 0.00 0.00 1044231.05 43372.92 1502323.43
00:10:46.023 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0xa000 length 0xa000
00:10:46.023 Nvme1n1 : 5.85 120.41 7.53 0.00 0.00 970552.28 81979.58 999006.95
00:10:46.023 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0x0 length 0x8000
00:10:46.023 Nvme2n1 : 5.89 113.01 7.06 0.00 0.00 1004486.36 44564.48 1532827.46
00:10:46.023 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0x8000 length 0x8000
00:10:46.023 Nvme2n1 : 5.90 126.69 7.92 0.00 0.00 906743.08 49330.73 934185.89
00:10:46.023 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0x0 length 0x8000
00:10:46.023 Nvme2n2 : 5.93 116.50 7.28 0.00 0.00 949247.88 45756.04 1548079.48
00:10:46.023 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0x8000 length 0x8000
00:10:46.023 Nvme2n2 : 5.90 130.18 8.14 0.00 0.00 862310.56 47662.55 949437.91
00:10:46.023 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0x0 length 0x8000
00:10:46.023 Nvme2n3 : 5.96 126.05 7.88 0.00 0.00 859967.50 23354.65 1578583.51
00:10:46.023 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0x8000 length 0x8000
00:10:46.023 Nvme2n3 : 5.95 132.83 8.30 0.00 0.00 818692.76 43611.23 1067641.02
00:10:46.023 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0x0 length 0x2000
00:10:46.023 Nvme3n1 : 6.03 145.72 9.11 0.00 0.00 725831.27 1273.48 1616713.54
00:10:46.023 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:46.023 Verification LBA range: start 0x2000 length 0x2000
00:10:46.023 Nvme3n1 : 6.00 149.13 9.32 0.00 0.00 713646.60 2412.92 1082893.03
00:10:46.023 ===================================================================================================================
00:10:46.023 Total : 1733.37 108.34 0.00 0.00 922944.75 1273.48 1814989.73
00:10:47.513
00:10:47.513 real 0m9.235s
00:10:47.513 user 0m16.993s
00:10:47.513 sys 0m0.367s
00:10:47.513 18:02:39 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:47.513 ************************************
00:10:47.513 END TEST bdev_verify_big_io
00:10:47.513 ************************************
00:10:47.513 18:02:39 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:10:47.513 18:02:39 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
18:02:39 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
18:02:39 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable
18:02:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:47.513 ************************************
00:10:47.513 START TEST bdev_write_zeroes
00:10:47.513 ************************************
00:10:47.513 18:02:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
[2024-05-15 18:02:39.904085] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization...
[2024-05-15 18:02:39.904276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68058 ]
00:10:47.871 [2024-05-15 18:02:40.078671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:47.871 [2024-05-15 18:02:40.313803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:48.807 Running I/O for 1 seconds...
00:10:49.752
00:10:49.752 Latency(us)
00:10:49.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:49.752 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:49.752 Nvme0n1p1 : 1.02 6939.35 27.11 0.00 0.00 18358.44 11319.85 40274.85
00:10:49.752 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:49.752 Nvme0n1p2 : 1.03 6927.81 27.06 0.00 0.00 18350.77 14894.55 30742.34
00:10:49.752 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:49.752 Nvme1n1 : 1.03 6917.46 27.02 0.00 0.00 18321.13 15609.48 28716.68
00:10:49.752 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:49.752 Nvme2n1 : 1.03 6955.95 27.17 0.00 0.00 18155.93 11021.96 22997.18
00:10:49.752 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:49.752 Nvme2n2 : 1.03 6945.27 27.13 0.00 0.00 18141.39 11141.12 22282.24
00:10:49.752 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:49.752 Nvme2n3 : 1.03 6934.62 27.09 0.00 0.00 18131.71 10664.49 22520.55
00:10:49.752 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:49.752 Nvme3n1 : 1.04 6924.37 27.05 0.00 0.00 18123.45 10962.39 22282.24
00:10:49.752 ===================================================================================================================
00:10:49.752 Total : 48544.82 189.63 0.00 0.00 18225.67 10664.49 40274.85
00:10:51.154
00:10:51.154 real 0m3.445s
00:10:51.154 user 0m3.032s
00:10:51.154 sys 0m0.291s
00:10:51.154 18:02:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:51.154 18:02:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:10:51.154 ************************************
00:10:51.154 END TEST bdev_write_zeroes
00:10:51.154 ************************************
00:10:51.154 18:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:51.154 18:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:10:51.154 18:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:51.154 18:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:51.154 ************************************
00:10:51.154 START TEST bdev_json_nonenclosed
00:10:51.154 ************************************
00:10:51.154 18:02:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:51.154 [2024-05-15 18:02:43.402779] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization...
00:10:51.154 [2024-05-15 18:02:43.402958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68111 ]
00:10:51.154 [2024-05-15 18:02:43.566310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:51.414 [2024-05-15 18:02:43.810824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:51.414 [2024-05-15 18:02:43.810939] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:10:51.414 [2024-05-15 18:02:43.810980] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:10:51.414 [2024-05-15 18:02:43.811001] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:51.983
00:10:51.983 real 0m0.902s
00:10:51.983 user 0m0.660s
00:10:51.983 sys 0m0.135s
00:10:51.983 18:02:44 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:51.983 18:02:44 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:10:51.983 ************************************
00:10:51.983 END TEST bdev_json_nonenclosed
00:10:51.983 ************************************
00:10:51.983 18:02:44 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:51.983 18:02:44 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:10:51.983 18:02:44 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:51.983 18:02:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:51.983 ************************************
00:10:51.983 START TEST bdev_json_nonarray
00:10:51.983 ************************************
00:10:51.983 18:02:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:51.983 [2024-05-15 18:02:44.378774] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization...
00:10:51.983 [2024-05-15 18:02:44.378964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68142 ]
00:10:52.241 [2024-05-15 18:02:44.565275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:52.500 [2024-05-15 18:02:44.812892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:52.500 [2024-05-15 18:02:44.813074] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:10:52.500 [2024-05-15 18:02:44.813110] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:10:52.500 [2024-05-15 18:02:44.813132] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:52.759
00:10:52.759 real 0m0.942s
00:10:52.759 user 0m0.665s
00:10:52.759 sys 0m0.169s
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:10:52.759 ************************************
00:10:52.759 END TEST bdev_json_nonarray
00:10:52.759 ************************************
00:10:52.759 18:02:45 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]]
00:10:52.759 18:02:45 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]]
00:10:52.759 18:02:45 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:10:52.759 18:02:45 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:10:52.759 18:02:45 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:52.759 18:02:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:52.759 ************************************
00:10:52.759 START TEST bdev_gpt_uuid
00:10:52.759 ************************************
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1121 -- # bdev_gpt_uuid
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=68172
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 68172
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@827 -- # '[' -z 68172 ']'
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@832 -- # local max_retries=100
00:10:52.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # xtrace_disable
00:10:52.759 18:02:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:53.018 [2024-05-15 18:02:45.363213] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization...
00:10:53.018 [2024-05-15 18:02:45.363385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68172 ]
00:10:53.276 [2024-05-15 18:02:45.525479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:53.276 [2024-05-15 18:02:45.765552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:54.212 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:10:54.212 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # return 0
00:10:54.212 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:10:54.212 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:54.212 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:54.470 Some configs were skipped because the RPC state that can call them passed over.
00:10:54.470 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:54.470 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine
00:10:54.470 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:54.470 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:54.470 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:54.470 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:10:54.470 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:54.470 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:54.470 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:54.470 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[
00:10:54.470 {
00:10:54.470 "name": "Nvme0n1p1",
00:10:54.470 "aliases": [
00:10:54.470 "6f89f330-603b-4116-ac73-2ca8eae53030"
00:10:54.470 ],
00:10:54.470 "product_name": "GPT Disk",
00:10:54.470 "block_size": 4096,
00:10:54.470 "num_blocks": 774144,
00:10:54.470 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:10:54.470 "md_size": 64,
00:10:54.470 "md_interleave": false,
00:10:54.470 "dif_type": 0,
00:10:54.470 "assigned_rate_limits": {
00:10:54.470 "rw_ios_per_sec": 0,
00:10:54.470 "rw_mbytes_per_sec": 0,
00:10:54.470 "r_mbytes_per_sec": 0,
00:10:54.470 "w_mbytes_per_sec": 0
00:10:54.470 },
00:10:54.470 "claimed": false,
00:10:54.470 "zoned": false,
00:10:54.470 "supported_io_types": {
00:10:54.470 "read": true,
00:10:54.470 "write": true,
00:10:54.470 "unmap": true,
00:10:54.470 "write_zeroes": true,
00:10:54.470 "flush": true,
00:10:54.470 "reset": true,
00:10:54.470 "compare": true,
00:10:54.470 "compare_and_write": false,
00:10:54.470 "abort": true,
00:10:54.470 "nvme_admin": false,
00:10:54.470 "nvme_io": false
00:10:54.470 },
00:10:54.470 "driver_specific": {
00:10:54.470 "gpt": {
00:10:54.470 "base_bdev": "Nvme0n1",
00:10:54.470 "offset_blocks": 256,
00:10:54.470 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:10:54.470 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:10:54.470 "partition_name": "SPDK_TEST_first"
00:10:54.470 }
00:10:54.470 }
00:10:54.470 }
00:10:54.470 ]'
00:10:54.470 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length
00:10:54.470 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]]
00:10:54.470 18:02:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]'
00:10:54.729 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:10:54.729 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:10:54.729 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:10:54.729 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:10:54.729 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:54.729 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:54.729 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:54.729 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[
00:10:54.729 {
00:10:54.729 "name": "Nvme0n1p2",
00:10:54.729 "aliases": [
00:10:54.729 "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:10:54.729 ],
00:10:54.729 "product_name": "GPT Disk",
00:10:54.729 "block_size": 4096,
00:10:54.729 "num_blocks": 774143,
00:10:54.729 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:10:54.729 "md_size": 64,
00:10:54.729 "md_interleave": false,
00:10:54.729 "dif_type": 0,
00:10:54.729 "assigned_rate_limits": {
00:10:54.729 "rw_ios_per_sec": 0,
00:10:54.729 "rw_mbytes_per_sec": 0,
00:10:54.729 "r_mbytes_per_sec": 0,
00:10:54.729 "w_mbytes_per_sec": 0
00:10:54.729 },
00:10:54.729 "claimed": false,
00:10:54.729 "zoned": false,
00:10:54.729 "supported_io_types": {
00:10:54.729 "read": true,
00:10:54.729 "write": true,
00:10:54.729 "unmap": true,
00:10:54.729 "write_zeroes": true,
00:10:54.729 "flush": true,
00:10:54.729 "reset": true,
00:10:54.729 "compare": true,
00:10:54.729 "compare_and_write": false,
00:10:54.729 "abort": true,
00:10:54.729 "nvme_admin": false,
00:10:54.729 "nvme_io": false
00:10:54.729 },
00:10:54.729 "driver_specific": {
00:10:54.729 "gpt": {
00:10:54.729 "base_bdev": "Nvme0n1",
00:10:54.729 "offset_blocks": 774400,
00:10:54.729 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:10:54.729 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:10:54.729 "partition_name": "SPDK_TEST_second"
00:10:54.729 }
00:10:54.729 }
00:10:54.729 }
00:10:54.729 ]'
00:10:54.729 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length
00:10:54.729 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]]
00:10:54.729 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]'
00:10:54.729 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:10:54.729 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:10:54.989 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:10:54.989 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 68172
00:10:54.989 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@946 -- # '[' -z 68172 ']'
00:10:54.989 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # kill -0 68172
00:10:54.989 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # uname
00:10:54.989 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:10:54.989 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68172
00:10:54.989 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:10:54.989 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
killing process with pid 68172
00:10:54.989 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68172'
00:10:54.989 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@965 -- # kill 68172
00:10:54.989 18:02:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # wait 68172
00:10:56.888
00:10:56.888 real 0m4.129s
00:10:56.888 user 0m4.381s
00:10:56.888 sys 0m0.576s
00:10:56.888 18:02:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:56.888 18:02:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:56.888 ************************************
00:10:56.888 END TEST bdev_gpt_uuid
00:10:56.888 ************************************
00:10:57.147 18:02:49 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]]
00:10:57.147 18:02:49 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT
00:10:57.147 18:02:49 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup
00:10:57.147 18:02:49 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:10:57.147 18:02:49 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:10:57.147 18:02:49 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]]
00:10:57.147 18:02:49 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]]
00:10:57.147 18:02:49 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]]
00:10:57.147 18:02:49 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:57.406 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:57.664 Waiting for block devices as requested
00:10:57.664 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:57.664 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:57.664 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:57.923 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:11:03.200 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:11:03.200 18:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]]
00:11:03.200 18:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1
00:11:03.200 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:11:03.200 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54
00:11:03.200 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:11:03.200 /dev/nvme1n1: calling ioctl to re-read partition table: Success
00:11:03.200 18:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]]
00:11:03.200
00:11:03.200 real 1m5.931s
00:11:03.200 user 1m23.400s
00:11:03.200 sys 0m10.427s
00:11:03.200 18:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@1122 -- # xtrace_disable
00:11:03.200 18:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:11:03.200 ************************************
00:11:03.200 END TEST blockdev_nvme_gpt
00:11:03.200 ************************************
00:11:03.200 18:02:55 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:11:03.200 18:02:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:11:03.200 18:02:55 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:11:03.200 18:02:55 -- common/autotest_common.sh@10 -- # set +x
00:11:03.200 ************************************
00:11:03.200 START TEST nvme
00:11:03.200 ************************************
00:11:03.200 18:02:55 nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:11:03.200 * Looking for test storage...
00:11:03.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:03.458 18:02:55 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:04.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:04.592 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:11:04.592 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:11:04.592 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:11:04.592 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:11:04.592 18:02:57 nvme -- nvme/nvme.sh@79 -- # uname
00:11:04.592 18:02:57 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:11:04.592 18:02:57 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:11:04.592 18:02:57 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
00:11:04.592 18:02:57 nvme -- common/autotest_common.sh@1078 -- # _start_stub '-s 4096 -i 0 -m 0xE'
00:11:04.592 18:02:57 nvme -- common/autotest_common.sh@1064 -- # _randomize_va_space=2
00:11:04.592 18:02:57 nvme -- common/autotest_common.sh@1065 -- # echo 0
00:11:04.592 18:02:57 nvme -- common/autotest_common.sh@1067 -- # stubpid=68811
00:11:04.592 18:02:57 nvme -- common/autotest_common.sh@1068 -- # echo Waiting for stub to ready for secondary processes...
00:11:04.592 Waiting for stub to ready for secondary processes...
00:11:04.592 18:02:57 nvme -- common/autotest_common.sh@1066 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
00:11:04.592 18:02:57 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']'
00:11:04.592 18:02:57 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/68811 ]]
00:11:04.592 18:02:57 nvme -- common/autotest_common.sh@1072 -- # sleep 1s
00:11:04.592 [2024-05-15 18:02:57.075397] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization...
00:11:04.592 [2024-05-15 18:02:57.075573] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ]
00:11:05.565 18:02:58 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']'
00:11:05.565 18:02:58 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/68811 ]]
00:11:05.565 18:02:58 nvme -- common/autotest_common.sh@1072 -- # sleep 1s
00:11:06.130 [2024-05-15 18:02:58.361512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:06.130 [2024-05-15 18:02:58.626716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:06.130 [2024-05-15 18:02:58.626789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:06.130 [2024-05-15 18:02:58.626810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:11:06.388 [2024-05-15 18:02:58.645528] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands
00:11:06.388 [2024-05-15 18:02:58.645598] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:11:06.388 [2024-05-15 18:02:58.655756] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:11:06.388 [2024-05-15 18:02:58.655944] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:11:06.388 [2024-05-15 18:02:58.657965] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:11:06.388 [2024-05-15 18:02:58.658218] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created
00:11:06.388 [2024-05-15 18:02:58.658286] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created
00:11:06.388 [2024-05-15 18:02:58.661277] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:11:06.388 [2024-05-15 18:02:58.661473] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created
00:11:06.388 [2024-05-15 18:02:58.661539] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created
00:11:06.388 [2024-05-15 18:02:58.663670] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:11:06.388 [2024-05-15 18:02:58.664055] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created
00:11:06.388 [2024-05-15 18:02:58.664121] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created
00:11:06.388 [2024-05-15 18:02:58.664178] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created
00:11:06.388 [2024-05-15 18:02:58.664284] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created
00:11:06.647 18:02:59 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']'
00:11:06.647 done.
18:02:59 nvme -- common/autotest_common.sh@1074 -- # echo done.
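The "Waiting for stub to ready for secondary processes..." exchange above is the standard autotest handshake: the stub primary process creates /var/run/spdk_stub0 once EAL initialization finishes, and the harness polls for that marker while confirming the stub is still alive. A condensed sketch of the pattern, not a verbatim copy of autotest_common.sh:

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ]; do
        # Bail out if the stub died before it could create its ready marker.
        [[ -e /proc/$stubpid ]] || exit 1
        sleep 1s
    done
    echo done.

With the stub holding the hugepage memory as the DPDK primary process, each NVMe test that follows can attach as a secondary process instead of paying full EAL initialization every time.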
00:11:06.647 18:02:59 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:11:06.647 18:02:59 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']'
00:11:06.647 18:02:59 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable
00:11:06.647 18:02:59 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:06.647 ************************************
00:11:06.647 START TEST nvme_reset
00:11:06.647 ************************************
00:11:06.647 18:02:59 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:11:06.908 Initializing NVMe Controllers
00:11:06.908 Skipping QEMU NVMe SSD at 0000:00:10.0
00:11:06.908 Skipping QEMU NVMe SSD at 0000:00:11.0
00:11:06.908 Skipping QEMU NVMe SSD at 0000:00:13.0
00:11:06.908 Skipping QEMU NVMe SSD at 0000:00:12.0
00:11:06.908 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting
00:11:06.908
00:11:06.908 real 0m0.314s
00:11:06.908 user 0m0.123s
00:11:06.908 sys 0m0.144s
00:11:06.908 18:02:59 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable
00:11:06.908 18:02:59 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x
00:11:06.908 ************************************
00:11:06.908 END TEST nvme_reset
00:11:06.908 ************************************
00:11:06.908 18:02:59 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:11:06.908 18:02:59 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:11:06.908 18:02:59 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable
00:11:06.908 18:02:59 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:07.167 ************************************
00:11:07.167 START TEST nvme_identify
00:11:07.167 ************************************
00:11:07.167 18:02:59 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify
00:11:07.167 18:02:59 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=()
00:11:07.167 18:02:59 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf
00:11:07.167 18:02:59 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:11:07.167 18:02:59 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:11:07.167 18:02:59 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=()
00:11:07.167 18:02:59 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs
00:11:07.167 18:02:59 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:11:07.167 18:02:59 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:11:07.167 18:02:59 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr'
00:11:07.167 18:02:59 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 4 == 0 ))
00:11:07.167 18:02:59 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:11:07.167 18:02:59 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
00:11:07.428 [2024-05-15 18:02:59.713011] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 68845 terminated unexpected
00:11:07.428 =====================================================
00:11:07.428 NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:07.428
===================================================== 00:11:07.428 Controller Capabilities/Features 00:11:07.428 ================================ 00:11:07.428 Vendor ID: 1b36 00:11:07.428 Subsystem Vendor ID: 1af4 00:11:07.428 Serial Number: 12340 00:11:07.428 Model Number: QEMU NVMe Ctrl 00:11:07.428 Firmware Version: 8.0.0 00:11:07.428 Recommended Arb Burst: 6 00:11:07.428 IEEE OUI Identifier: 00 54 52 00:11:07.428 Multi-path I/O 00:11:07.428 May have multiple subsystem ports: No 00:11:07.428 May have multiple controllers: No 00:11:07.428 Associated with SR-IOV VF: No 00:11:07.428 Max Data Transfer Size: 524288 00:11:07.428 Max Number of Namespaces: 256 00:11:07.428 Max Number of I/O Queues: 64 00:11:07.428 NVMe Specification Version (VS): 1.4 00:11:07.428 NVMe Specification Version (Identify): 1.4 00:11:07.428 Maximum Queue Entries: 2048 00:11:07.428 Contiguous Queues Required: Yes 00:11:07.428 Arbitration Mechanisms Supported 00:11:07.428 Weighted Round Robin: Not Supported 00:11:07.428 Vendor Specific: Not Supported 00:11:07.428 Reset Timeout: 7500 ms 00:11:07.428 Doorbell Stride: 4 bytes 00:11:07.428 NVM Subsystem Reset: Not Supported 00:11:07.428 Command Sets Supported 00:11:07.428 NVM Command Set: Supported 00:11:07.428 Boot Partition: Not Supported 00:11:07.428 Memory Page Size Minimum: 4096 bytes 00:11:07.428 Memory Page Size Maximum: 65536 bytes 00:11:07.428 Persistent Memory Region: Not Supported 00:11:07.428 Optional Asynchronous Events Supported 00:11:07.428 Namespace Attribute Notices: Supported 00:11:07.428 Firmware Activation Notices: Not Supported 00:11:07.428 ANA Change Notices: Not Supported 00:11:07.428 PLE Aggregate Log Change Notices: Not Supported 00:11:07.428 LBA Status Info Alert Notices: Not Supported 00:11:07.428 EGE Aggregate Log Change Notices: Not Supported 00:11:07.428 Normal NVM Subsystem Shutdown event: Not Supported 00:11:07.428 Zone Descriptor Change Notices: Not Supported 00:11:07.428 Discovery Log Change Notices: Not Supported 00:11:07.428 Controller Attributes 00:11:07.428 128-bit Host Identifier: Not Supported 00:11:07.428 Non-Operational Permissive Mode: Not Supported 00:11:07.428 NVM Sets: Not Supported 00:11:07.428 Read Recovery Levels: Not Supported 00:11:07.428 Endurance Groups: Not Supported 00:11:07.428 Predictable Latency Mode: Not Supported 00:11:07.428 Traffic Based Keep ALive: Not Supported 00:11:07.428 Namespace Granularity: Not Supported 00:11:07.428 SQ Associations: Not Supported 00:11:07.428 UUID List: Not Supported 00:11:07.428 Multi-Domain Subsystem: Not Supported 00:11:07.428 Fixed Capacity Management: Not Supported 00:11:07.428 Variable Capacity Management: Not Supported 00:11:07.428 Delete Endurance Group: Not Supported 00:11:07.428 Delete NVM Set: Not Supported 00:11:07.428 Extended LBA Formats Supported: Supported 00:11:07.428 Flexible Data Placement Supported: Not Supported 00:11:07.428 00:11:07.428 Controller Memory Buffer Support 00:11:07.428 ================================ 00:11:07.428 Supported: No 00:11:07.428 00:11:07.428 Persistent Memory Region Support 00:11:07.428 ================================ 00:11:07.428 Supported: No 00:11:07.428 00:11:07.428 Admin Command Set Attributes 00:11:07.428 ============================ 00:11:07.428 Security Send/Receive: Not Supported 00:11:07.428 Format NVM: Supported 00:11:07.428 Firmware Activate/Download: Not Supported 00:11:07.428 Namespace Management: Supported 00:11:07.428 Device Self-Test: Not Supported 00:11:07.428 Directives: Supported 00:11:07.428 NVMe-MI: Not Supported 
00:11:07.428 Virtualization Management: Not Supported 00:11:07.428 Doorbell Buffer Config: Supported 00:11:07.428 Get LBA Status Capability: Not Supported 00:11:07.428 Command & Feature Lockdown Capability: Not Supported 00:11:07.428 Abort Command Limit: 4 00:11:07.428 Async Event Request Limit: 4 00:11:07.428 Number of Firmware Slots: N/A 00:11:07.428 Firmware Slot 1 Read-Only: N/A 00:11:07.428 Firmware Activation Without Reset: N/A 00:11:07.428 Multiple Update Detection Support: N/A 00:11:07.428 Firmware Update Granularity: No Information Provided 00:11:07.428 Per-Namespace SMART Log: Yes 00:11:07.428 Asymmetric Namespace Access Log Page: Not Supported 00:11:07.428 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:07.428 Command Effects Log Page: Supported 00:11:07.428 Get Log Page Extended Data: Supported 00:11:07.428 Telemetry Log Pages: Not Supported 00:11:07.428 Persistent Event Log Pages: Not Supported 00:11:07.428 Supported Log Pages Log Page: May Support 00:11:07.428 Commands Supported & Effects Log Page: Not Supported 00:11:07.428 Feature Identifiers & Effects Log Page:May Support 00:11:07.428 NVMe-MI Commands & Effects Log Page: May Support 00:11:07.428 Data Area 4 for Telemetry Log: Not Supported 00:11:07.428 Error Log Page Entries Supported: 1 00:11:07.428 Keep Alive: Not Supported 00:11:07.428 00:11:07.428 NVM Command Set Attributes 00:11:07.428 ========================== 00:11:07.428 Submission Queue Entry Size 00:11:07.428 Max: 64 00:11:07.428 Min: 64 00:11:07.428 Completion Queue Entry Size 00:11:07.428 Max: 16 00:11:07.428 Min: 16 00:11:07.428 Number of Namespaces: 256 00:11:07.428 Compare Command: Supported 00:11:07.428 Write Uncorrectable Command: Not Supported 00:11:07.429 Dataset Management Command: Supported 00:11:07.429 Write Zeroes Command: Supported 00:11:07.429 Set Features Save Field: Supported 00:11:07.429 Reservations: Not Supported 00:11:07.429 Timestamp: Supported 00:11:07.429 Copy: Supported 00:11:07.429 Volatile Write Cache: Present 00:11:07.429 Atomic Write Unit (Normal): 1 00:11:07.429 Atomic Write Unit (PFail): 1 00:11:07.429 Atomic Compare & Write Unit: 1 00:11:07.429 Fused Compare & Write: Not Supported 00:11:07.429 Scatter-Gather List 00:11:07.429 SGL Command Set: Supported 00:11:07.429 SGL Keyed: Not Supported 00:11:07.429 SGL Bit Bucket Descriptor: Not Supported 00:11:07.429 SGL Metadata Pointer: Not Supported 00:11:07.429 Oversized SGL: Not Supported 00:11:07.429 SGL Metadata Address: Not Supported 00:11:07.429 SGL Offset: Not Supported 00:11:07.429 Transport SGL Data Block: Not Supported 00:11:07.429 Replay Protected Memory Block: Not Supported 00:11:07.429 00:11:07.429 Firmware Slot Information 00:11:07.429 ========================= 00:11:07.429 Active slot: 1 00:11:07.429 Slot 1 Firmware Revision: 1.0 00:11:07.429 00:11:07.429 00:11:07.429 Commands Supported and Effects 00:11:07.429 ============================== 00:11:07.429 Admin Commands 00:11:07.429 -------------- 00:11:07.429 Delete I/O Submission Queue (00h): Supported 00:11:07.429 Create I/O Submission Queue (01h): Supported 00:11:07.429 Get Log Page (02h): Supported 00:11:07.429 Delete I/O Completion Queue (04h): Supported 00:11:07.429 Create I/O Completion Queue (05h): Supported 00:11:07.429 Identify (06h): Supported 00:11:07.429 Abort (08h): Supported 00:11:07.429 Set Features (09h): Supported 00:11:07.429 Get Features (0Ah): Supported 00:11:07.429 Asynchronous Event Request (0Ch): Supported 00:11:07.429 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:07.429 Directive 
Send (19h): Supported 00:11:07.429 Directive Receive (1Ah): Supported 00:11:07.429 Virtualization Management (1Ch): Supported 00:11:07.429 Doorbell Buffer Config (7Ch): Supported 00:11:07.429 Format NVM (80h): Supported LBA-Change 00:11:07.429 I/O Commands 00:11:07.429 ------------ 00:11:07.429 Flush (00h): Supported LBA-Change 00:11:07.429 Write (01h): Supported LBA-Change 00:11:07.429 Read (02h): Supported 00:11:07.429 Compare (05h): Supported 00:11:07.429 Write Zeroes (08h): Supported LBA-Change 00:11:07.429 Dataset Management (09h): Supported LBA-Change 00:11:07.429 Unknown (0Ch): Supported 00:11:07.429 Unknown (12h): Supported 00:11:07.429 Copy (19h): Supported LBA-Change 00:11:07.429 Unknown (1Dh): Supported LBA-Change 00:11:07.429 00:11:07.429 Error Log 00:11:07.429 ========= 00:11:07.429 00:11:07.429 Arbitration 00:11:07.429 =========== 00:11:07.429 Arbitration Burst: no limit 00:11:07.429 00:11:07.429 Power Management 00:11:07.429 ================ 00:11:07.429 Number of Power States: 1 00:11:07.429 Current Power State: Power State #0 00:11:07.429 Power State #0: 00:11:07.429 Max Power: 25.00 W 00:11:07.429 Non-Operational State: Operational 00:11:07.429 Entry Latency: 16 microseconds 00:11:07.429 Exit Latency: 4 microseconds 00:11:07.429 Relative Read Throughput: 0 00:11:07.429 Relative Read Latency: 0 00:11:07.429 Relative Write Throughput: 0 00:11:07.429 Relative Write Latency: 0 00:11:07.429 Idle Power[2024-05-15 18:02:59.714508] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 68845 terminated unexpected 00:11:07.429 : Not Reported 00:11:07.429 Active Power: Not Reported 00:11:07.429 Non-Operational Permissive Mode: Not Supported 00:11:07.429 00:11:07.429 Health Information 00:11:07.429 ================== 00:11:07.429 Critical Warnings: 00:11:07.429 Available Spare Space: OK 00:11:07.429 Temperature: OK 00:11:07.429 Device Reliability: OK 00:11:07.429 Read Only: No 00:11:07.429 Volatile Memory Backup: OK 00:11:07.429 Current Temperature: 323 Kelvin (50 Celsius) 00:11:07.429 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:07.429 Available Spare: 0% 00:11:07.429 Available Spare Threshold: 0% 00:11:07.429 Life Percentage Used: 0% 00:11:07.429 Data Units Read: 1049 00:11:07.429 Data Units Written: 882 00:11:07.429 Host Read Commands: 49137 00:11:07.429 Host Write Commands: 47639 00:11:07.429 Controller Busy Time: 0 minutes 00:11:07.429 Power Cycles: 0 00:11:07.429 Power On Hours: 0 hours 00:11:07.429 Unsafe Shutdowns: 0 00:11:07.429 Unrecoverable Media Errors: 0 00:11:07.429 Lifetime Error Log Entries: 0 00:11:07.429 Warning Temperature Time: 0 minutes 00:11:07.429 Critical Temperature Time: 0 minutes 00:11:07.429 00:11:07.429 Number of Queues 00:11:07.429 ================ 00:11:07.429 Number of I/O Submission Queues: 64 00:11:07.429 Number of I/O Completion Queues: 64 00:11:07.429 00:11:07.429 ZNS Specific Controller Data 00:11:07.429 ============================ 00:11:07.429 Zone Append Size Limit: 0 00:11:07.429 00:11:07.429 00:11:07.429 Active Namespaces 00:11:07.429 ================= 00:11:07.429 Namespace ID:1 00:11:07.429 Error Recovery Timeout: Unlimited 00:11:07.429 Command Set Identifier: NVM (00h) 00:11:07.429 Deallocate: Supported 00:11:07.429 Deallocated/Unwritten Error: Supported 00:11:07.429 Deallocated Read Value: All 0x00 00:11:07.429 Deallocate in Write Zeroes: Not Supported 00:11:07.429 Deallocated Guard Field: 0xFFFF 00:11:07.429 Flush: Supported 00:11:07.429 Reservation: Not Supported 00:11:07.429 Metadata Transferred as: 
Separate Metadata Buffer 00:11:07.429 Namespace Sharing Capabilities: Private 00:11:07.429 Size (in LBAs): 1548666 (5GiB) 00:11:07.429 Capacity (in LBAs): 1548666 (5GiB) 00:11:07.429 Utilization (in LBAs): 1548666 (5GiB) 00:11:07.429 Thin Provisioning: Not Supported 00:11:07.429 Per-NS Atomic Units: No 00:11:07.429 Maximum Single Source Range Length: 128 00:11:07.429 Maximum Copy Length: 128 00:11:07.429 Maximum Source Range Count: 128 00:11:07.429 NGUID/EUI64 Never Reused: No 00:11:07.429 Namespace Write Protected: No 00:11:07.430 Number of LBA Formats: 8 00:11:07.430 Current LBA Format: LBA Format #07 00:11:07.430 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:07.430 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:07.430 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:07.430 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:07.430 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:07.430 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:07.430 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:07.430 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:07.430 00:11:07.430 ===================================================== 00:11:07.430 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:07.430 ===================================================== 00:11:07.430 Controller Capabilities/Features 00:11:07.430 ================================ 00:11:07.430 Vendor ID: 1b36 00:11:07.430 Subsystem Vendor ID: 1af4 00:11:07.430 Serial Number: 12341 00:11:07.430 Model Number: QEMU NVMe Ctrl 00:11:07.430 Firmware Version: 8.0.0 00:11:07.430 Recommended Arb Burst: 6 00:11:07.430 IEEE OUI Identifier: 00 54 52 00:11:07.430 Multi-path I/O 00:11:07.430 May have multiple subsystem ports: No 00:11:07.430 May have multiple controllers: No 00:11:07.430 Associated with SR-IOV VF: No 00:11:07.430 Max Data Transfer Size: 524288 00:11:07.430 Max Number of Namespaces: 256 00:11:07.430 Max Number of I/O Queues: 64 00:11:07.430 NVMe Specification Version (VS): 1.4 00:11:07.430 NVMe Specification Version (Identify): 1.4 00:11:07.430 Maximum Queue Entries: 2048 00:11:07.430 Contiguous Queues Required: Yes 00:11:07.430 Arbitration Mechanisms Supported 00:11:07.430 Weighted Round Robin: Not Supported 00:11:07.430 Vendor Specific: Not Supported 00:11:07.430 Reset Timeout: 7500 ms 00:11:07.430 Doorbell Stride: 4 bytes 00:11:07.430 NVM Subsystem Reset: Not Supported 00:11:07.430 Command Sets Supported 00:11:07.430 NVM Command Set: Supported 00:11:07.430 Boot Partition: Not Supported 00:11:07.430 Memory Page Size Minimum: 4096 bytes 00:11:07.430 Memory Page Size Maximum: 65536 bytes 00:11:07.430 Persistent Memory Region: Not Supported 00:11:07.430 Optional Asynchronous Events Supported 00:11:07.430 Namespace Attribute Notices: Supported 00:11:07.430 Firmware Activation Notices: Not Supported 00:11:07.430 ANA Change Notices: Not Supported 00:11:07.430 PLE Aggregate Log Change Notices: Not Supported 00:11:07.430 LBA Status Info Alert Notices: Not Supported 00:11:07.430 EGE Aggregate Log Change Notices: Not Supported 00:11:07.430 Normal NVM Subsystem Shutdown event: Not Supported 00:11:07.430 Zone Descriptor Change Notices: Not Supported 00:11:07.430 Discovery Log Change Notices: Not Supported 00:11:07.430 Controller Attributes 00:11:07.430 128-bit Host Identifier: Not Supported 00:11:07.430 Non-Operational Permissive Mode: Not Supported 00:11:07.430 NVM Sets: Not Supported 00:11:07.430 Read Recovery Levels: Not Supported 00:11:07.430 Endurance Groups: Not Supported 00:11:07.430 
Predictable Latency Mode: Not Supported 00:11:07.430 Traffic Based Keep Alive: Not Supported 00:11:07.430 Namespace Granularity: Not Supported 00:11:07.430 SQ Associations: Not Supported 00:11:07.430 UUID List: Not Supported 00:11:07.430 Multi-Domain Subsystem: Not Supported 00:11:07.430 Fixed Capacity Management: Not Supported 00:11:07.430 Variable Capacity Management: Not Supported 00:11:07.430 Delete Endurance Group: Not Supported 00:11:07.430 Delete NVM Set: Not Supported 00:11:07.430 Extended LBA Formats Supported: Supported 00:11:07.430 Flexible Data Placement Supported: Not Supported 00:11:07.430 00:11:07.430 Controller Memory Buffer Support 00:11:07.430 ================================ 00:11:07.430 Supported: No 00:11:07.430 00:11:07.430 Persistent Memory Region Support 00:11:07.430 ================================ 00:11:07.430 Supported: No 00:11:07.430 00:11:07.430 Admin Command Set Attributes 00:11:07.430 ============================ 00:11:07.430 Security Send/Receive: Not Supported 00:11:07.430 Format NVM: Supported 00:11:07.430 Firmware Activate/Download: Not Supported 00:11:07.430 Namespace Management: Supported 00:11:07.430 Device Self-Test: Not Supported 00:11:07.430 Directives: Supported 00:11:07.430 NVMe-MI: Not Supported 00:11:07.430 Virtualization Management: Not Supported 00:11:07.430 Doorbell Buffer Config: Supported 00:11:07.430 Get LBA Status Capability: Not Supported 00:11:07.430 Command & Feature Lockdown Capability: Not Supported 00:11:07.430 Abort Command Limit: 4 00:11:07.430 Async Event Request Limit: 4 00:11:07.430 Number of Firmware Slots: N/A 00:11:07.430 Firmware Slot 1 Read-Only: N/A 00:11:07.430 Firmware Activation Without Reset: N/A 00:11:07.430 Multiple Update Detection Support: N/A 00:11:07.430 Firmware Update Granularity: No Information Provided 00:11:07.430 Per-Namespace SMART Log: Yes 00:11:07.430 Asymmetric Namespace Access Log Page: Not Supported 00:11:07.430 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:07.430 Command Effects Log Page: Supported 00:11:07.430 Get Log Page Extended Data: Supported 00:11:07.430 Telemetry Log Pages: Not Supported 00:11:07.430 Persistent Event Log Pages: Not Supported 00:11:07.430 Supported Log Pages Log Page: May Support 00:11:07.430 Commands Supported & Effects Log Page: Not Supported 00:11:07.430 Feature Identifiers & Effects Log Page:May Support 00:11:07.430 NVMe-MI Commands & Effects Log Page: May Support 00:11:07.430 Data Area 4 for Telemetry Log: Not Supported 00:11:07.430 Error Log Page Entries Supported: 1 00:11:07.430 Keep Alive: Not Supported 00:11:07.430 00:11:07.430 NVM Command Set Attributes 00:11:07.430 ========================== 00:11:07.430 Submission Queue Entry Size 00:11:07.430 Max: 64 00:11:07.430 Min: 64 00:11:07.430 Completion Queue Entry Size 00:11:07.430 Max: 16 00:11:07.430 Min: 16 00:11:07.430 Number of Namespaces: 256 00:11:07.430 Compare Command: Supported 00:11:07.430 Write Uncorrectable Command: Not Supported 00:11:07.430 Dataset Management Command: Supported 00:11:07.430 Write Zeroes Command: Supported 00:11:07.430 Set Features Save Field: Supported 00:11:07.430 Reservations: Not Supported 00:11:07.430 Timestamp: Supported 00:11:07.430 Copy: Supported 00:11:07.430 Volatile Write Cache: Present 00:11:07.430 Atomic Write Unit (Normal): 1 00:11:07.430 Atomic Write Unit (PFail): 1 00:11:07.430 Atomic Compare & Write Unit: 1 00:11:07.430 Fused Compare & Write: Not Supported 00:11:07.430 Scatter-Gather List 00:11:07.430 SGL Command Set: Supported 00:11:07.430 SGL Keyed: Not Supported
00:11:07.430 SGL Bit Bucket Descriptor: Not Supported 00:11:07.430 SGL Metadata Pointer: Not Supported 00:11:07.430 Oversized SGL: Not Supported 00:11:07.430 SGL Metadata Address: Not Supported 00:11:07.430 SGL Offset: Not Supported 00:11:07.430 Transport SGL Data Block: Not Supported 00:11:07.431 Replay Protected Memory Block: Not Supported 00:11:07.431 00:11:07.431 Firmware Slot Information 00:11:07.431 ========================= 00:11:07.431 Active slot: 1 00:11:07.431 Slot 1 Firmware Revision: 1.0 00:11:07.431 00:11:07.431 00:11:07.431 Commands Supported and Effects 00:11:07.431 ============================== 00:11:07.431 Admin Commands 00:11:07.431 -------------- 00:11:07.431 Delete I/O Submission Queue (00h): Supported 00:11:07.431 Create I/O Submission Queue (01h): Supported 00:11:07.431 Get Log Page (02h): Supported 00:11:07.431 Delete I/O Completion Queue (04h): Supported 00:11:07.431 Create I/O Completion Queue (05h): Supported 00:11:07.431 Identify (06h): Supported 00:11:07.431 Abort (08h): Supported 00:11:07.431 Set Features (09h): Supported 00:11:07.431 Get Features (0Ah): Supported 00:11:07.431 Asynchronous Event Request (0Ch): Supported 00:11:07.431 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:07.431 Directive Send (19h): Supported 00:11:07.431 Directive Receive (1Ah): Supported 00:11:07.431 Virtualization Management (1Ch): Supported 00:11:07.431 Doorbell Buffer Config (7Ch): Supported 00:11:07.431 Format NVM (80h): Supported LBA-Change 00:11:07.431 I/O Commands 00:11:07.431 ------------ 00:11:07.431 Flush (00h): Supported LBA-Change 00:11:07.431 Write (01h): Supported LBA-Change 00:11:07.431 Read (02h): Supported 00:11:07.431 Compare (05h): Supported 00:11:07.431 Write Zeroes (08h): Supported LBA-Change 00:11:07.431 Dataset Management (09h): Supported LBA-Change 00:11:07.431 Unknown (0Ch): Supported 00:11:07.431 Unknown (12h): Supported 00:11:07.431 Copy (19h): Supported LBA-Change 00:11:07.431 Unknown (1Dh): Supported LBA-Change 00:11:07.431 00:11:07.431 Error Log 00:11:07.431 ========= 00:11:07.431 00:11:07.431 Arbitration 00:11:07.431 =========== 00:11:07.431 Arbitration Burst: no limit 00:11:07.431 00:11:07.431 Power Management 00:11:07.431 ================ 00:11:07.431 Number of Power States: 1 00:11:07.431 Current Power State: Power State #0 00:11:07.431 Power State #0: 00:11:07.431 Max Power: 25.00 W 00:11:07.431 Non-Operational State: Operational 00:11:07.431 Entry Latency: 16 microseconds 00:11:07.431 Exit Latency: 4 microseconds 00:11:07.431 Relative Read Throughput: 0 00:11:07.431 Relative Read Latency: 0 00:11:07.431 Relative Write Throughput: 0 00:11:07.431 Relative Write Latency: 0 00:11:07.431 Idle Power: Not Reported 00:11:07.431 Active Power: Not Reported 00:11:07.431 Non-Operational Permissive Mode: Not Supported 00:11:07.431 00:11:07.431 Health Information 00:11:07.431 ================== 00:11:07.431 Critical Warnings: 00:11:07.431 Available Spare Space: OK 00:11:07.431 Temperature: OK 00:11:07.431 Device Reliability: OK 00:11:07.431 Read Only: No 00:11:07.431 Volatile Memory Backup: OK 00:11:07.431 Current Temperature: 323 Kelvin (50 Celsius) 00:11:07.431 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:07.431 Available Spare: 0% 00:11:07.431 Available Spare Threshold: 0% 00:11:07.431 Life Percentage Used: 0% 00:11:07.431 Data Units Read: 767 00:11:07.431 Data Units Written: 617 00:11:07.431 Host Read Commands: 34793 00:11:07.431 Host Write Commands: 32530 00:11:07.431 Controller Busy Time: 0 minutes 00:11:07.431 Power Cycles: 0 
00:11:07.431 Power On Hours: 0 hours 00:11:07.431 Unsafe Shutdowns: 0 00:11:07.431 Unrecoverable Media Errors: 0 00:11:07.431 Lifetime Error Log Entries: 0 00:11:07.431 Warning Temperature Time: 0 minutes 00:11:07.431 Critical Temperature Time: 0 minutes 00:11:07.431 00:11:07.431 Number of Queues 00:11:07.431 ================ 00:11:07.431 Number of I/O Submission Queues: 64 00:11:07.431 Number of I/O Completion Queues: 64 00:11:07.431 00:11:07.431 ZNS Specific Controller Data 00:11:07.431 ============================ 00:11:07.431 Zone Append Size Limit: 0 00:11:07.431 00:11:07.431 00:11:07.431 Active Namespaces 00:11:07.431 ================= 00:11:07.431 Namespace ID:1 00:11:07.431 Error Recovery Timeout: Unlimited 00:11:07.431 [2024-05-15 18:02:59.715582] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 68845 terminated unexpected 00:11:07.431 Command Set Identifier: NVM (00h) 00:11:07.431 Deallocate: Supported 00:11:07.431 Deallocated/Unwritten Error: Supported 00:11:07.431 Deallocated Read Value: All 0x00 00:11:07.431 Deallocate in Write Zeroes: Not Supported 00:11:07.431 Deallocated Guard Field: 0xFFFF 00:11:07.431 Flush: Supported 00:11:07.431 Reservation: Not Supported 00:11:07.431 Namespace Sharing Capabilities: Private 00:11:07.431 Size (in LBAs): 1310720 (5GiB) 00:11:07.431 Capacity (in LBAs): 1310720 (5GiB) 00:11:07.431 Utilization (in LBAs): 1310720 (5GiB) 00:11:07.431 Thin Provisioning: Not Supported 00:11:07.431 Per-NS Atomic Units: No 00:11:07.431 Maximum Single Source Range Length: 128 00:11:07.431 Maximum Copy Length: 128 00:11:07.431 Maximum Source Range Count: 128 00:11:07.431 NGUID/EUI64 Never Reused: No 00:11:07.431 Namespace Write Protected: No 00:11:07.431 Number of LBA Formats: 8 00:11:07.431 Current LBA Format: LBA Format #04 00:11:07.431 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:07.431 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:07.431 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:07.431 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:07.431 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:07.431 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:07.431 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:07.431 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:07.431 00:11:07.431 ===================================================== 00:11:07.431 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:07.431 ===================================================== 00:11:07.431 Controller Capabilities/Features 00:11:07.431 ================================ 00:11:07.431 Vendor ID: 1b36 00:11:07.431 Subsystem Vendor ID: 1af4 00:11:07.431 Serial Number: 12343 00:11:07.431 Model Number: QEMU NVMe Ctrl 00:11:07.431 Firmware Version: 8.0.0 00:11:07.431 Recommended Arb Burst: 6 00:11:07.431 IEEE OUI Identifier: 00 54 52 00:11:07.431 Multi-path I/O 00:11:07.431 May have multiple subsystem ports: No 00:11:07.432 May have multiple controllers: Yes 00:11:07.432 Associated with SR-IOV VF: No 00:11:07.432 Max Data Transfer Size: 524288 00:11:07.432 Max Number of Namespaces: 256 00:11:07.432 Max Number of I/O Queues: 64 00:11:07.432 NVMe Specification Version (VS): 1.4 00:11:07.432 NVMe Specification Version (Identify): 1.4 00:11:07.432 Maximum Queue Entries: 2048 00:11:07.432 Contiguous Queues Required: Yes 00:11:07.432 Arbitration Mechanisms Supported 00:11:07.432 Weighted Round Robin: Not Supported 00:11:07.432 Vendor Specific: Not Supported 00:11:07.432 Reset Timeout: 7500 ms 00:11:07.432
Doorbell Stride: 4 bytes 00:11:07.432 NVM Subsystem Reset: Not Supported 00:11:07.432 Command Sets Supported 00:11:07.432 NVM Command Set: Supported 00:11:07.432 Boot Partition: Not Supported 00:11:07.432 Memory Page Size Minimum: 4096 bytes 00:11:07.432 Memory Page Size Maximum: 65536 bytes 00:11:07.432 Persistent Memory Region: Not Supported 00:11:07.432 Optional Asynchronous Events Supported 00:11:07.432 Namespace Attribute Notices: Supported 00:11:07.432 Firmware Activation Notices: Not Supported 00:11:07.432 ANA Change Notices: Not Supported 00:11:07.432 PLE Aggregate Log Change Notices: Not Supported 00:11:07.432 LBA Status Info Alert Notices: Not Supported 00:11:07.432 EGE Aggregate Log Change Notices: Not Supported 00:11:07.432 Normal NVM Subsystem Shutdown event: Not Supported 00:11:07.432 Zone Descriptor Change Notices: Not Supported 00:11:07.432 Discovery Log Change Notices: Not Supported 00:11:07.432 Controller Attributes 00:11:07.432 128-bit Host Identifier: Not Supported 00:11:07.432 Non-Operational Permissive Mode: Not Supported 00:11:07.432 NVM Sets: Not Supported 00:11:07.432 Read Recovery Levels: Not Supported 00:11:07.432 Endurance Groups: Supported 00:11:07.432 Predictable Latency Mode: Not Supported 00:11:07.432 Traffic Based Keep Alive: Not Supported 00:11:07.432 Namespace Granularity: Not Supported 00:11:07.432 SQ Associations: Not Supported 00:11:07.432 UUID List: Not Supported 00:11:07.432 Multi-Domain Subsystem: Not Supported 00:11:07.432 Fixed Capacity Management: Not Supported 00:11:07.432 Variable Capacity Management: Not Supported 00:11:07.432 Delete Endurance Group: Not Supported 00:11:07.432 Delete NVM Set: Not Supported 00:11:07.432 Extended LBA Formats Supported: Supported 00:11:07.432 Flexible Data Placement Supported: Supported 00:11:07.432 00:11:07.432 Controller Memory Buffer Support 00:11:07.432 ================================ 00:11:07.432 Supported: No 00:11:07.432 00:11:07.432 Persistent Memory Region Support 00:11:07.432 ================================ 00:11:07.432 Supported: No 00:11:07.432 00:11:07.432 Admin Command Set Attributes 00:11:07.432 ============================ 00:11:07.432 Security Send/Receive: Not Supported 00:11:07.432 Format NVM: Supported 00:11:07.432 Firmware Activate/Download: Not Supported 00:11:07.432 Namespace Management: Supported 00:11:07.432 Device Self-Test: Not Supported 00:11:07.432 Directives: Supported 00:11:07.432 NVMe-MI: Not Supported 00:11:07.432 Virtualization Management: Not Supported 00:11:07.432 Doorbell Buffer Config: Supported 00:11:07.432 Get LBA Status Capability: Not Supported 00:11:07.432 Command & Feature Lockdown Capability: Not Supported 00:11:07.432 Abort Command Limit: 4 00:11:07.432 Async Event Request Limit: 4 00:11:07.432 Number of Firmware Slots: N/A 00:11:07.432 Firmware Slot 1 Read-Only: N/A 00:11:07.432 Firmware Activation Without Reset: N/A 00:11:07.432 Multiple Update Detection Support: N/A 00:11:07.432 Firmware Update Granularity: No Information Provided 00:11:07.432 Per-Namespace SMART Log: Yes 00:11:07.432 Asymmetric Namespace Access Log Page: Not Supported 00:11:07.432 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:07.432 Command Effects Log Page: Supported 00:11:07.432 Get Log Page Extended Data: Supported 00:11:07.432 Telemetry Log Pages: Not Supported 00:11:07.432 Persistent Event Log Pages: Not Supported 00:11:07.432 Supported Log Pages Log Page: May Support 00:11:07.432 Commands Supported & Effects Log Page: Not Supported 00:11:07.432 Feature Identifiers & Effects Log
Page:May Support 00:11:07.432 NVMe-MI Commands & Effects Log Page: May Support 00:11:07.432 Data Area 4 for Telemetry Log: Not Supported 00:11:07.432 Error Log Page Entries Supported: 1 00:11:07.432 Keep Alive: Not Supported 00:11:07.432 00:11:07.432 NVM Command Set Attributes 00:11:07.432 ========================== 00:11:07.432 Submission Queue Entry Size 00:11:07.432 Max: 64 00:11:07.432 Min: 64 00:11:07.432 Completion Queue Entry Size 00:11:07.432 Max: 16 00:11:07.432 Min: 16 00:11:07.432 Number of Namespaces: 256 00:11:07.432 Compare Command: Supported 00:11:07.432 Write Uncorrectable Command: Not Supported 00:11:07.432 Dataset Management Command: Supported 00:11:07.432 Write Zeroes Command: Supported 00:11:07.432 Set Features Save Field: Supported 00:11:07.432 Reservations: Not Supported 00:11:07.432 Timestamp: Supported 00:11:07.432 Copy: Supported 00:11:07.432 Volatile Write Cache: Present 00:11:07.432 Atomic Write Unit (Normal): 1 00:11:07.432 Atomic Write Unit (PFail): 1 00:11:07.432 Atomic Compare & Write Unit: 1 00:11:07.432 Fused Compare & Write: Not Supported 00:11:07.432 Scatter-Gather List 00:11:07.432 SGL Command Set: Supported 00:11:07.432 SGL Keyed: Not Supported 00:11:07.432 SGL Bit Bucket Descriptor: Not Supported 00:11:07.432 SGL Metadata Pointer: Not Supported 00:11:07.432 Oversized SGL: Not Supported 00:11:07.432 SGL Metadata Address: Not Supported 00:11:07.432 SGL Offset: Not Supported 00:11:07.432 Transport SGL Data Block: Not Supported 00:11:07.432 Replay Protected Memory Block: Not Supported 00:11:07.432 00:11:07.432 Firmware Slot Information 00:11:07.432 ========================= 00:11:07.432 Active slot: 1 00:11:07.432 Slot 1 Firmware Revision: 1.0 00:11:07.432 00:11:07.432 00:11:07.432 Commands Supported and Effects 00:11:07.432 ============================== 00:11:07.432 Admin Commands 00:11:07.432 -------------- 00:11:07.432 Delete I/O Submission Queue (00h): Supported 00:11:07.432 Create I/O Submission Queue (01h): Supported 00:11:07.432 Get Log Page (02h): Supported 00:11:07.432 Delete I/O Completion Queue (04h): Supported 00:11:07.432 Create I/O Completion Queue (05h): Supported 00:11:07.432 Identify (06h): Supported 00:11:07.432 Abort (08h): Supported 00:11:07.432 Set Features (09h): Supported 00:11:07.432 Get Features (0Ah): Supported 00:11:07.433 Asynchronous Event Request (0Ch): Supported 00:11:07.433 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:07.433 Directive Send (19h): Supported 00:11:07.433 Directive Receive (1Ah): Supported 00:11:07.433 Virtualization Management (1Ch): Supported 00:11:07.433 Doorbell Buffer Config (7Ch): Supported 00:11:07.433 Format NVM (80h): Supported LBA-Change 00:11:07.433 I/O Commands 00:11:07.433 ------------ 00:11:07.433 Flush (00h): Supported LBA-Change 00:11:07.433 Write (01h): Supported LBA-Change 00:11:07.433 Read (02h): Supported 00:11:07.433 Compare (05h): Supported 00:11:07.433 Write Zeroes (08h): Supported LBA-Change 00:11:07.433 Dataset Management (09h): Supported LBA-Change 00:11:07.433 Unknown (0Ch): Supported 00:11:07.433 Unknown (12h): Supported 00:11:07.433 Copy (19h): Supported LBA-Change 00:11:07.433 Unknown (1Dh): Supported LBA-Change 00:11:07.433 00:11:07.433 Error Log 00:11:07.433 ========= 00:11:07.433 00:11:07.433 Arbitration 00:11:07.433 =========== 00:11:07.433 Arbitration Burst: no limit 00:11:07.433 00:11:07.433 Power Management 00:11:07.433 ================ 00:11:07.433 Number of Power States: 1 00:11:07.433 Current Power State: Power State #0 00:11:07.433 Power State #0: 
00:11:07.433 Max Power: 25.00 W 00:11:07.433 Non-Operational State: Operational 00:11:07.433 Entry Latency: 16 microseconds 00:11:07.433 Exit Latency: 4 microseconds 00:11:07.433 Relative Read Throughput: 0 00:11:07.433 Relative Read Latency: 0 00:11:07.433 Relative Write Throughput: 0 00:11:07.433 Relative Write Latency: 0 00:11:07.433 Idle Power: Not Reported 00:11:07.433 Active Power: Not Reported 00:11:07.433 Non-Operational Permissive Mode: Not Supported 00:11:07.433 00:11:07.433 Health Information 00:11:07.433 ================== 00:11:07.433 Critical Warnings: 00:11:07.433 Available Spare Space: OK 00:11:07.433 Temperature: OK 00:11:07.433 Device Reliability: OK 00:11:07.433 Read Only: No 00:11:07.433 Volatile Memory Backup: OK 00:11:07.433 Current Temperature: 323 Kelvin (50 Celsius) 00:11:07.433 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:07.433 Available Spare: 0% 00:11:07.433 Available Spare Threshold: 0% 00:11:07.433 Life Percentage Used: 0% 00:11:07.433 Data Units Read: 792 00:11:07.433 Data Units Written: 686 00:11:07.433 Host Read Commands: 34661 00:11:07.433 Host Write Commands: 33251 00:11:07.433 Controller Busy Time: 0 minutes 00:11:07.433 Power Cycles: 0 00:11:07.433 Power On Hours: 0 hours 00:11:07.433 Unsafe Shutdowns: 0 00:11:07.433 Unrecoverable Media Errors: 0 00:11:07.433 Lifetime Error Log Entries: 0 00:11:07.433 Warning Temperature Time: 0 minutes 00:11:07.433 Critical Temperature Time: 0 minutes 00:11:07.433 00:11:07.433 Number of Queues 00:11:07.433 ================ 00:11:07.433 Number of I/O Submission Queues: 64 00:11:07.433 Number of I/O Completion Queues: 64 00:11:07.433 00:11:07.433 ZNS Specific Controller Data 00:11:07.433 ============================ 00:11:07.433 Zone Append Size Limit: 0 00:11:07.433 00:11:07.433 00:11:07.433 Active Namespaces 00:11:07.433 ================= 00:11:07.433 Namespace ID:1 00:11:07.433 Error Recovery Timeout: Unlimited 00:11:07.433 Command Set Identifier: NVM (00h) 00:11:07.433 Deallocate: Supported 00:11:07.433 Deallocated/Unwritten Error: Supported 00:11:07.433 Deallocated Read Value: All 0x00 00:11:07.433 Deallocate in Write Zeroes: Not Supported 00:11:07.433 Deallocated Guard Field: 0xFFFF 00:11:07.433 Flush: Supported 00:11:07.433 Reservation: Not Supported 00:11:07.433 Namespace Sharing Capabilities: Multiple Controllers 00:11:07.433 Size (in LBAs): 262144 (1GiB) 00:11:07.433 Capacity (in LBAs): 262144 (1GiB) 00:11:07.433 Utilization (in LBAs): 262144 (1GiB) 00:11:07.433 Thin Provisioning: Not Supported 00:11:07.433 Per-NS Atomic Units: No 00:11:07.433 Maximum Single Source Range Length: 128 00:11:07.433 Maximum Copy Length: 128 00:11:07.433 Maximum Source Range Count: 128 00:11:07.433 NGUID/EUI64 Never Reused: No 00:11:07.433 Namespace Write Protected: No 00:11:07.433 Endurance group ID: 1 00:11:07.433 Number of LBA Formats: 8 00:11:07.433 Current LBA Format: LBA Format #04 00:11:07.433 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:07.433 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:07.433 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:07.433 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:07.433 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:07.433 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:07.433 [2024-05-15 18:02:59.717849] nvme_ctrlr.c:3485:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 68845 terminated unexpected 00:11:07.433 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:07.433 LBA Format #07: Data Size: 4096 Metadata Size: 64
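(Aside on the capacity figures: they follow directly from the LBA count and the active LBA format. With the current LBA Format #04 (4096-byte data blocks, no metadata), the 262144-LBA namespace above is exactly 262144 * 4096 = 2^30 bytes = 1GiB, and the 1310720- and 1048576-LBA namespaces reported by the 12341 and 12342 controllers work out to 5GiB and 4GiB the same way. A quick bash sanity check, illustrative only and not part of the captured output:

    $ echo $((262144 * 4096)) $((1310720 * 4096)) $((1048576 * 4096))
    1073741824 5368709120 4294967296
    $ echo $((1310720 * 4096 / 1024 / 1024 / 1024))GiB
    5GiB
)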
00:11:07.433 00:11:07.433 Get Feature FDP: 00:11:07.433 ================ 00:11:07.433 Enabled: Yes 00:11:07.433 FDP configuration index: 0 00:11:07.433 00:11:07.433 FDP configurations log page 00:11:07.433 =========================== 00:11:07.433 Number of FDP configurations: 1 00:11:07.433 Version: 0 00:11:07.433 Size: 112 00:11:07.433 FDP Configuration Descriptor: 0 00:11:07.433 Descriptor Size: 96 00:11:07.433 Reclaim Group Identifier format: 2 00:11:07.433 FDP Volatile Write Cache: Not Present 00:11:07.433 FDP Configuration: Valid 00:11:07.433 Vendor Specific Size: 0 00:11:07.433 Number of Reclaim Groups: 2 00:11:07.433 Number of Reclaim Unit Handles: 8 00:11:07.433 Max Placement Identifiers: 128 00:11:07.433 Number of Namespaces Supported: 256 00:11:07.433 Reclaim Unit Nominal Size: 6000000 bytes 00:11:07.434 Estimated Reclaim Unit Time Limit: Not Reported 00:11:07.434 RUH Desc #000: RUH Type: Initially Isolated 00:11:07.434 RUH Desc #001: RUH Type: Initially Isolated 00:11:07.434 RUH Desc #002: RUH Type: Initially Isolated 00:11:07.434 RUH Desc #003: RUH Type: Initially Isolated 00:11:07.434 RUH Desc #004: RUH Type: Initially Isolated 00:11:07.434 RUH Desc #005: RUH Type: Initially Isolated 00:11:07.434 RUH Desc #006: RUH Type: Initially Isolated 00:11:07.434 RUH Desc #007: RUH Type: Initially Isolated 00:11:07.434 00:11:07.434 FDP reclaim unit handle usage log page 00:11:07.434 ====================================== 00:11:07.434 Number of Reclaim Unit Handles: 8 00:11:07.434 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:07.434 RUH Usage Desc #001: RUH Attributes: Unused 00:11:07.434 RUH Usage Desc #002: RUH Attributes: Unused 00:11:07.434 RUH Usage Desc #003: RUH Attributes: Unused 00:11:07.434 RUH Usage Desc #004: RUH Attributes: Unused 00:11:07.434 RUH Usage Desc #005: RUH Attributes: Unused 00:11:07.434 RUH Usage Desc #006: RUH Attributes: Unused 00:11:07.434 RUH Usage Desc #007: RUH Attributes: Unused 00:11:07.434 00:11:07.434 FDP statistics log page 00:11:07.434 ======================= 00:11:07.434 Host bytes with metadata written: 430743552 00:11:07.434 Media bytes with metadata written: 430788608 00:11:07.434 Media bytes erased: 0 00:11:07.434 00:11:07.434 FDP events log page 00:11:07.434 =================== 00:11:07.434 Number of FDP events: 0 00:11:07.434 00:11:07.434 ===================================================== 00:11:07.434 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:07.434 ===================================================== 00:11:07.434 Controller Capabilities/Features 00:11:07.434 ================================ 00:11:07.434 Vendor ID: 1b36 00:11:07.434 Subsystem Vendor ID: 1af4 00:11:07.434 Serial Number: 12342 00:11:07.434 Model Number: QEMU NVMe Ctrl 00:11:07.434 Firmware Version: 8.0.0 00:11:07.434 Recommended Arb Burst: 6 00:11:07.434 IEEE OUI Identifier: 00 54 52 00:11:07.434 Multi-path I/O 00:11:07.434 May have multiple subsystem ports: No 00:11:07.434 May have multiple controllers: No 00:11:07.434 Associated with SR-IOV VF: No 00:11:07.434 Max Data Transfer Size: 524288 00:11:07.434 Max Number of Namespaces: 256 00:11:07.434 Max Number of I/O Queues: 64 00:11:07.434 NVMe Specification Version (VS): 1.4 00:11:07.434 NVMe Specification Version (Identify): 1.4 00:11:07.434 Maximum Queue Entries: 2048 00:11:07.434 Contiguous Queues Required: Yes 00:11:07.434 Arbitration Mechanisms Supported 00:11:07.434 Weighted Round Robin: Not Supported 00:11:07.434 Vendor Specific: Not Supported 00:11:07.434 Reset Timeout: 7500 ms 00:11:07.434
Doorbell Stride: 4 bytes 00:11:07.434 NVM Subsystem Reset: Not Supported 00:11:07.434 Command Sets Supported 00:11:07.434 NVM Command Set: Supported 00:11:07.434 Boot Partition: Not Supported 00:11:07.434 Memory Page Size Minimum: 4096 bytes 00:11:07.434 Memory Page Size Maximum: 65536 bytes 00:11:07.434 Persistent Memory Region: Not Supported 00:11:07.434 Optional Asynchronous Events Supported 00:11:07.434 Namespace Attribute Notices: Supported 00:11:07.434 Firmware Activation Notices: Not Supported 00:11:07.434 ANA Change Notices: Not Supported 00:11:07.434 PLE Aggregate Log Change Notices: Not Supported 00:11:07.434 LBA Status Info Alert Notices: Not Supported 00:11:07.434 EGE Aggregate Log Change Notices: Not Supported 00:11:07.434 Normal NVM Subsystem Shutdown event: Not Supported 00:11:07.434 Zone Descriptor Change Notices: Not Supported 00:11:07.434 Discovery Log Change Notices: Not Supported 00:11:07.434 Controller Attributes 00:11:07.434 128-bit Host Identifier: Not Supported 00:11:07.434 Non-Operational Permissive Mode: Not Supported 00:11:07.434 NVM Sets: Not Supported 00:11:07.434 Read Recovery Levels: Not Supported 00:11:07.434 Endurance Groups: Not Supported 00:11:07.434 Predictable Latency Mode: Not Supported 00:11:07.434 Traffic Based Keep Alive: Not Supported 00:11:07.434 Namespace Granularity: Not Supported 00:11:07.434 SQ Associations: Not Supported 00:11:07.434 UUID List: Not Supported 00:11:07.434 Multi-Domain Subsystem: Not Supported 00:11:07.434 Fixed Capacity Management: Not Supported 00:11:07.434 Variable Capacity Management: Not Supported 00:11:07.434 Delete Endurance Group: Not Supported 00:11:07.434 Delete NVM Set: Not Supported 00:11:07.434 Extended LBA Formats Supported: Supported 00:11:07.434 Flexible Data Placement Supported: Not Supported 00:11:07.434 00:11:07.434 Controller Memory Buffer Support 00:11:07.434 ================================ 00:11:07.434 Supported: No 00:11:07.434 00:11:07.434 Persistent Memory Region Support 00:11:07.434 ================================ 00:11:07.434 Supported: No 00:11:07.434 00:11:07.434 Admin Command Set Attributes 00:11:07.434 ============================ 00:11:07.434 Security Send/Receive: Not Supported 00:11:07.434 Format NVM: Supported 00:11:07.434 Firmware Activate/Download: Not Supported 00:11:07.434 Namespace Management: Supported 00:11:07.434 Device Self-Test: Not Supported 00:11:07.434 Directives: Supported 00:11:07.434 NVMe-MI: Not Supported 00:11:07.434 Virtualization Management: Not Supported 00:11:07.434 Doorbell Buffer Config: Supported 00:11:07.434 Get LBA Status Capability: Not Supported 00:11:07.434 Command & Feature Lockdown Capability: Not Supported 00:11:07.434 Abort Command Limit: 4 00:11:07.434 Async Event Request Limit: 4 00:11:07.434 Number of Firmware Slots: N/A 00:11:07.434 Firmware Slot 1 Read-Only: N/A 00:11:07.435 Firmware Activation Without Reset: N/A 00:11:07.435 Multiple Update Detection Support: N/A 00:11:07.435 Firmware Update Granularity: No Information Provided 00:11:07.435 Per-Namespace SMART Log: Yes 00:11:07.435 Asymmetric Namespace Access Log Page: Not Supported 00:11:07.435 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:07.435 Command Effects Log Page: Supported 00:11:07.435 Get Log Page Extended Data: Supported 00:11:07.435 Telemetry Log Pages: Not Supported 00:11:07.435 Persistent Event Log Pages: Not Supported 00:11:07.435 Supported Log Pages Log Page: May Support 00:11:07.435 Commands Supported & Effects Log Page: Not Supported 00:11:07.435 Feature Identifiers & Effects Log
Page:May Support 00:11:07.435 NVMe-MI Commands & Effects Log Page: May Support 00:11:07.435 Data Area 4 for Telemetry Log: Not Supported 00:11:07.435 Error Log Page Entries Supported: 1 00:11:07.435 Keep Alive: Not Supported 00:11:07.435 00:11:07.435 NVM Command Set Attributes 00:11:07.435 ========================== 00:11:07.435 Submission Queue Entry Size 00:11:07.435 Max: 64 00:11:07.435 Min: 64 00:11:07.435 Completion Queue Entry Size 00:11:07.435 Max: 16 00:11:07.435 Min: 16 00:11:07.435 Number of Namespaces: 256 00:11:07.435 Compare Command: Supported 00:11:07.435 Write Uncorrectable Command: Not Supported 00:11:07.435 Dataset Management Command: Supported 00:11:07.435 Write Zeroes Command: Supported 00:11:07.435 Set Features Save Field: Supported 00:11:07.435 Reservations: Not Supported 00:11:07.435 Timestamp: Supported 00:11:07.435 Copy: Supported 00:11:07.435 Volatile Write Cache: Present 00:11:07.435 Atomic Write Unit (Normal): 1 00:11:07.435 Atomic Write Unit (PFail): 1 00:11:07.435 Atomic Compare & Write Unit: 1 00:11:07.435 Fused Compare & Write: Not Supported 00:11:07.435 Scatter-Gather List 00:11:07.435 SGL Command Set: Supported 00:11:07.435 SGL Keyed: Not Supported 00:11:07.435 SGL Bit Bucket Descriptor: Not Supported 00:11:07.435 SGL Metadata Pointer: Not Supported 00:11:07.435 Oversized SGL: Not Supported 00:11:07.435 SGL Metadata Address: Not Supported 00:11:07.435 SGL Offset: Not Supported 00:11:07.435 Transport SGL Data Block: Not Supported 00:11:07.435 Replay Protected Memory Block: Not Supported 00:11:07.435 00:11:07.435 Firmware Slot Information 00:11:07.435 ========================= 00:11:07.435 Active slot: 1 00:11:07.435 Slot 1 Firmware Revision: 1.0 00:11:07.435 00:11:07.435 00:11:07.435 Commands Supported and Effects 00:11:07.435 ============================== 00:11:07.435 Admin Commands 00:11:07.435 -------------- 00:11:07.435 Delete I/O Submission Queue (00h): Supported 00:11:07.435 Create I/O Submission Queue (01h): Supported 00:11:07.435 Get Log Page (02h): Supported 00:11:07.435 Delete I/O Completion Queue (04h): Supported 00:11:07.435 Create I/O Completion Queue (05h): Supported 00:11:07.435 Identify (06h): Supported 00:11:07.435 Abort (08h): Supported 00:11:07.435 Set Features (09h): Supported 00:11:07.435 Get Features (0Ah): Supported 00:11:07.435 Asynchronous Event Request (0Ch): Supported 00:11:07.435 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:07.435 Directive Send (19h): Supported 00:11:07.435 Directive Receive (1Ah): Supported 00:11:07.435 Virtualization Management (1Ch): Supported 00:11:07.435 Doorbell Buffer Config (7Ch): Supported 00:11:07.435 Format NVM (80h): Supported LBA-Change 00:11:07.435 I/O Commands 00:11:07.435 ------------ 00:11:07.435 Flush (00h): Supported LBA-Change 00:11:07.435 Write (01h): Supported LBA-Change 00:11:07.435 Read (02h): Supported 00:11:07.435 Compare (05h): Supported 00:11:07.435 Write Zeroes (08h): Supported LBA-Change 00:11:07.435 Dataset Management (09h): Supported LBA-Change 00:11:07.435 Unknown (0Ch): Supported 00:11:07.435 Unknown (12h): Supported 00:11:07.435 Copy (19h): Supported LBA-Change 00:11:07.435 Unknown (1Dh): Supported LBA-Change 00:11:07.435 00:11:07.435 Error Log 00:11:07.435 ========= 00:11:07.435 00:11:07.435 Arbitration 00:11:07.435 =========== 00:11:07.435 Arbitration Burst: no limit 00:11:07.435 00:11:07.435 Power Management 00:11:07.435 ================ 00:11:07.435 Number of Power States: 1 00:11:07.435 Current Power State: Power State #0 00:11:07.435 Power State #0: 
00:11:07.435 Max Power: 25.00 W 00:11:07.435 Non-Operational State: Operational 00:11:07.435 Entry Latency: 16 microseconds 00:11:07.435 Exit Latency: 4 microseconds 00:11:07.435 Relative Read Throughput: 0 00:11:07.435 Relative Read Latency: 0 00:11:07.435 Relative Write Throughput: 0 00:11:07.435 Relative Write Latency: 0 00:11:07.435 Idle Power: Not Reported 00:11:07.435 Active Power: Not Reported 00:11:07.435 Non-Operational Permissive Mode: Not Supported 00:11:07.435 00:11:07.435 Health Information 00:11:07.435 ================== 00:11:07.435 Critical Warnings: 00:11:07.435 Available Spare Space: OK 00:11:07.435 Temperature: OK 00:11:07.435 Device Reliability: OK 00:11:07.435 Read Only: No 00:11:07.435 Volatile Memory Backup: OK 00:11:07.435 Current Temperature: 323 Kelvin (50 Celsius) 00:11:07.436 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:07.436 Available Spare: 0% 00:11:07.436 Available Spare Threshold: 0% 00:11:07.436 Life Percentage Used: 0% 00:11:07.436 Data Units Read: 2222 00:11:07.436 Data Units Written: 1902 00:11:07.436 Host Read Commands: 102301 00:11:07.436 Host Write Commands: 98071 00:11:07.436 Controller Busy Time: 0 minutes 00:11:07.436 Power Cycles: 0 00:11:07.436 Power On Hours: 0 hours 00:11:07.436 Unsafe Shutdowns: 0 00:11:07.436 Unrecoverable Media Errors: 0 00:11:07.436 Lifetime Error Log Entries: 0 00:11:07.436 Warning Temperature Time: 0 minutes 00:11:07.436 Critical Temperature Time: 0 minutes 00:11:07.436 00:11:07.436 Number of Queues 00:11:07.436 ================ 00:11:07.436 Number of I/O Submission Queues: 64 00:11:07.436 Number of I/O Completion Queues: 64 00:11:07.436 00:11:07.436 ZNS Specific Controller Data 00:11:07.436 ============================ 00:11:07.436 Zone Append Size Limit: 0 00:11:07.436 00:11:07.436 00:11:07.436 Active Namespaces 00:11:07.436 ================= 00:11:07.436 Namespace ID:1 00:11:07.436 Error Recovery Timeout: Unlimited 00:11:07.436 Command Set Identifier: NVM (00h) 00:11:07.436 Deallocate: Supported 00:11:07.436 Deallocated/Unwritten Error: Supported 00:11:07.436 Deallocated Read Value: All 0x00 00:11:07.436 Deallocate in Write Zeroes: Not Supported 00:11:07.436 Deallocated Guard Field: 0xFFFF 00:11:07.436 Flush: Supported 00:11:07.436 Reservation: Not Supported 00:11:07.436 Namespace Sharing Capabilities: Private 00:11:07.436 Size (in LBAs): 1048576 (4GiB) 00:11:07.436 Capacity (in LBAs): 1048576 (4GiB) 00:11:07.436 Utilization (in LBAs): 1048576 (4GiB) 00:11:07.436 Thin Provisioning: Not Supported 00:11:07.436 Per-NS Atomic Units: No 00:11:07.436 Maximum Single Source Range Length: 128 00:11:07.436 Maximum Copy Length: 128 00:11:07.436 Maximum Source Range Count: 128 00:11:07.436 NGUID/EUI64 Never Reused: No 00:11:07.436 Namespace Write Protected: No 00:11:07.436 Number of LBA Formats: 8 00:11:07.436 Current LBA Format: LBA Format #04 00:11:07.436 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:07.436 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:07.436 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:07.436 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:07.436 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:07.436 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:07.436 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:07.436 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:07.436 00:11:07.436 Namespace ID:2 00:11:07.436 Error Recovery Timeout: Unlimited 00:11:07.436 Command Set Identifier: NVM (00h) 00:11:07.436 Deallocate: Supported 00:11:07.436 
Deallocated/Unwritten Error: Supported 00:11:07.436 Deallocated Read Value: All 0x00 00:11:07.436 Deallocate in Write Zeroes: Not Supported 00:11:07.436 Deallocated Guard Field: 0xFFFF 00:11:07.436 Flush: Supported 00:11:07.436 Reservation: Not Supported 00:11:07.436 Namespace Sharing Capabilities: Private 00:11:07.436 Size (in LBAs): 1048576 (4GiB) 00:11:07.436 Capacity (in LBAs): 1048576 (4GiB) 00:11:07.436 Utilization (in LBAs): 1048576 (4GiB) 00:11:07.436 Thin Provisioning: Not Supported 00:11:07.436 Per-NS Atomic Units: No 00:11:07.436 Maximum Single Source Range Length: 128 00:11:07.436 Maximum Copy Length: 128 00:11:07.436 Maximum Source Range Count: 128 00:11:07.436 NGUID/EUI64 Never Reused: No 00:11:07.436 Namespace Write Protected: No 00:11:07.436 Number of LBA Formats: 8 00:11:07.436 Current LBA Format: LBA Format #04 00:11:07.436 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:07.436 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:07.436 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:07.436 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:07.436 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:07.436 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:07.436 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:07.436 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:07.436 00:11:07.436 Namespace ID:3 00:11:07.436 Error Recovery Timeout: Unlimited 00:11:07.436 Command Set Identifier: NVM (00h) 00:11:07.436 Deallocate: Supported 00:11:07.436 Deallocated/Unwritten Error: Supported 00:11:07.436 Deallocated Read Value: All 0x00 00:11:07.436 Deallocate in Write Zeroes: Not Supported 00:11:07.436 Deallocated Guard Field: 0xFFFF 00:11:07.436 Flush: Supported 00:11:07.436 Reservation: Not Supported 00:11:07.436 Namespace Sharing Capabilities: Private 00:11:07.436 Size (in LBAs): 1048576 (4GiB) 00:11:07.436 Capacity (in LBAs): 1048576 (4GiB) 00:11:07.436 Utilization (in LBAs): 1048576 (4GiB) 00:11:07.436 Thin Provisioning: Not Supported 00:11:07.436 Per-NS Atomic Units: No 00:11:07.436 Maximum Single Source Range Length: 128 00:11:07.436 Maximum Copy Length: 128 00:11:07.436 Maximum Source Range Count: 128 00:11:07.436 NGUID/EUI64 Never Reused: No 00:11:07.436 Namespace Write Protected: No 00:11:07.436 Number of LBA Formats: 8 00:11:07.436 Current LBA Format: LBA Format #04 00:11:07.436 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:07.436 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:07.436 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:07.436 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:07.436 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:07.436 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:07.436 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:07.436 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:07.436 00:11:07.436 18:02:59 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:07.436 18:02:59 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:07.696 ===================================================== 00:11:07.696 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:07.696 ===================================================== 00:11:07.696 Controller Capabilities/Features 00:11:07.696 ================================ 00:11:07.696 Vendor ID: 1b36 00:11:07.696 Subsystem Vendor ID: 1af4 00:11:07.696 Serial Number: 12340 00:11:07.696 Model Number: QEMU NVMe Ctrl 
00:11:07.696 Firmware Version: 8.0.0 00:11:07.696 Recommended Arb Burst: 6 00:11:07.696 IEEE OUI Identifier: 00 54 52 00:11:07.696 Multi-path I/O 00:11:07.696 May have multiple subsystem ports: No 00:11:07.696 May have multiple controllers: No 00:11:07.696 Associated with SR-IOV VF: No 00:11:07.696 Max Data Transfer Size: 524288 00:11:07.696 Max Number of Namespaces: 256 00:11:07.696 Max Number of I/O Queues: 64 00:11:07.696 NVMe Specification Version (VS): 1.4 00:11:07.696 NVMe Specification Version (Identify): 1.4 00:11:07.696 Maximum Queue Entries: 2048 00:11:07.696 Contiguous Queues Required: Yes 00:11:07.696 Arbitration Mechanisms Supported 00:11:07.696 Weighted Round Robin: Not Supported 00:11:07.696 Vendor Specific: Not Supported 00:11:07.696 Reset Timeout: 7500 ms 00:11:07.696 Doorbell Stride: 4 bytes 00:11:07.696 NVM Subsystem Reset: Not Supported 00:11:07.696 Command Sets Supported 00:11:07.696 NVM Command Set: Supported 00:11:07.696 Boot Partition: Not Supported 00:11:07.696 Memory Page Size Minimum: 4096 bytes 00:11:07.696 Memory Page Size Maximum: 65536 bytes 00:11:07.696 Persistent Memory Region: Not Supported 00:11:07.696 Optional Asynchronous Events Supported 00:11:07.696 Namespace Attribute Notices: Supported 00:11:07.696 Firmware Activation Notices: Not Supported 00:11:07.696 ANA Change Notices: Not Supported 00:11:07.696 PLE Aggregate Log Change Notices: Not Supported 00:11:07.696 LBA Status Info Alert Notices: Not Supported 00:11:07.696 EGE Aggregate Log Change Notices: Not Supported 00:11:07.696 Normal NVM Subsystem Shutdown event: Not Supported 00:11:07.696 Zone Descriptor Change Notices: Not Supported 00:11:07.696 Discovery Log Change Notices: Not Supported 00:11:07.696 Controller Attributes 00:11:07.696 128-bit Host Identifier: Not Supported 00:11:07.696 Non-Operational Permissive Mode: Not Supported 00:11:07.696 NVM Sets: Not Supported 00:11:07.696 Read Recovery Levels: Not Supported 00:11:07.696 Endurance Groups: Not Supported 00:11:07.696 Predictable Latency Mode: Not Supported 00:11:07.696 Traffic Based Keep Alive: Not Supported 00:11:07.696 Namespace Granularity: Not Supported 00:11:07.696 SQ Associations: Not Supported 00:11:07.696 UUID List: Not Supported 00:11:07.696 Multi-Domain Subsystem: Not Supported 00:11:07.696 Fixed Capacity Management: Not Supported 00:11:07.696 Variable Capacity Management: Not Supported 00:11:07.696 Delete Endurance Group: Not Supported 00:11:07.696 Delete NVM Set: Not Supported 00:11:07.696 Extended LBA Formats Supported: Supported 00:11:07.696 Flexible Data Placement Supported: Not Supported 00:11:07.696 00:11:07.696 Controller Memory Buffer Support 00:11:07.696 ================================ 00:11:07.696 Supported: No 00:11:07.696 00:11:07.696 Persistent Memory Region Support 00:11:07.696 ================================ 00:11:07.696 Supported: No 00:11:07.696 00:11:07.696 Admin Command Set Attributes 00:11:07.696 ============================ 00:11:07.696 Security Send/Receive: Not Supported 00:11:07.696 Format NVM: Supported 00:11:07.696 Firmware Activate/Download: Not Supported 00:11:07.696 Namespace Management: Supported 00:11:07.696 Device Self-Test: Not Supported 00:11:07.696 Directives: Supported 00:11:07.696 NVMe-MI: Not Supported 00:11:07.696 Virtualization Management: Not Supported 00:11:07.696 Doorbell Buffer Config: Supported 00:11:07.696 Get LBA Status Capability: Not Supported 00:11:07.696 Command & Feature Lockdown Capability: Not Supported 00:11:07.696 Abort Command Limit: 4 00:11:07.696 Async Event Request
Limit: 4 00:11:07.696 Number of Firmware Slots: N/A 00:11:07.696 Firmware Slot 1 Read-Only: N/A 00:11:07.696 Firmware Activation Without Reset: N/A 00:11:07.696 Multiple Update Detection Support: N/A 00:11:07.696 Firmware Update Granularity: No Information Provided 00:11:07.696 Per-Namespace SMART Log: Yes 00:11:07.696 Asymmetric Namespace Access Log Page: Not Supported 00:11:07.696 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:07.696 Command Effects Log Page: Supported 00:11:07.696 Get Log Page Extended Data: Supported 00:11:07.696 Telemetry Log Pages: Not Supported 00:11:07.696 Persistent Event Log Pages: Not Supported 00:11:07.696 Supported Log Pages Log Page: May Support 00:11:07.696 Commands Supported & Effects Log Page: Not Supported 00:11:07.696 Feature Identifiers & Effects Log Page:May Support 00:11:07.696 NVMe-MI Commands & Effects Log Page: May Support 00:11:07.696 Data Area 4 for Telemetry Log: Not Supported 00:11:07.696 Error Log Page Entries Supported: 1 00:11:07.696 Keep Alive: Not Supported 00:11:07.696 00:11:07.696 NVM Command Set Attributes 00:11:07.696 ========================== 00:11:07.696 Submission Queue Entry Size 00:11:07.696 Max: 64 00:11:07.696 Min: 64 00:11:07.697 Completion Queue Entry Size 00:11:07.697 Max: 16 00:11:07.697 Min: 16 00:11:07.697 Number of Namespaces: 256 00:11:07.697 Compare Command: Supported 00:11:07.697 Write Uncorrectable Command: Not Supported 00:11:07.697 Dataset Management Command: Supported 00:11:07.697 Write Zeroes Command: Supported 00:11:07.697 Set Features Save Field: Supported 00:11:07.697 Reservations: Not Supported 00:11:07.697 Timestamp: Supported 00:11:07.697 Copy: Supported 00:11:07.697 Volatile Write Cache: Present 00:11:07.697 Atomic Write Unit (Normal): 1 00:11:07.697 Atomic Write Unit (PFail): 1 00:11:07.697 Atomic Compare & Write Unit: 1 00:11:07.697 Fused Compare & Write: Not Supported 00:11:07.697 Scatter-Gather List 00:11:07.697 SGL Command Set: Supported 00:11:07.697 SGL Keyed: Not Supported 00:11:07.697 SGL Bit Bucket Descriptor: Not Supported 00:11:07.697 SGL Metadata Pointer: Not Supported 00:11:07.697 Oversized SGL: Not Supported 00:11:07.697 SGL Metadata Address: Not Supported 00:11:07.697 SGL Offset: Not Supported 00:11:07.697 Transport SGL Data Block: Not Supported 00:11:07.697 Replay Protected Memory Block: Not Supported 00:11:07.697 00:11:07.697 Firmware Slot Information 00:11:07.697 ========================= 00:11:07.697 Active slot: 1 00:11:07.697 Slot 1 Firmware Revision: 1.0 00:11:07.697 00:11:07.697 00:11:07.697 Commands Supported and Effects 00:11:07.697 ============================== 00:11:07.697 Admin Commands 00:11:07.697 -------------- 00:11:07.697 Delete I/O Submission Queue (00h): Supported 00:11:07.697 Create I/O Submission Queue (01h): Supported 00:11:07.697 Get Log Page (02h): Supported 00:11:07.697 Delete I/O Completion Queue (04h): Supported 00:11:07.697 Create I/O Completion Queue (05h): Supported 00:11:07.697 Identify (06h): Supported 00:11:07.697 Abort (08h): Supported 00:11:07.697 Set Features (09h): Supported 00:11:07.697 Get Features (0Ah): Supported 00:11:07.697 Asynchronous Event Request (0Ch): Supported 00:11:07.697 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:07.697 Directive Send (19h): Supported 00:11:07.697 Directive Receive (1Ah): Supported 00:11:07.697 Virtualization Management (1Ch): Supported 00:11:07.697 Doorbell Buffer Config (7Ch): Supported 00:11:07.697 Format NVM (80h): Supported LBA-Change 00:11:07.697 I/O Commands 00:11:07.697 ------------ 
00:11:07.697 Flush (00h): Supported LBA-Change 00:11:07.697 Write (01h): Supported LBA-Change 00:11:07.697 Read (02h): Supported 00:11:07.697 Compare (05h): Supported 00:11:07.697 Write Zeroes (08h): Supported LBA-Change 00:11:07.697 Dataset Management (09h): Supported LBA-Change 00:11:07.697 Unknown (0Ch): Supported 00:11:07.697 Unknown (12h): Supported 00:11:07.697 Copy (19h): Supported LBA-Change 00:11:07.697 Unknown (1Dh): Supported LBA-Change 00:11:07.697 00:11:07.697 Error Log 00:11:07.697 ========= 00:11:07.697 00:11:07.697 Arbitration 00:11:07.697 =========== 00:11:07.697 Arbitration Burst: no limit 00:11:07.697 00:11:07.697 Power Management 00:11:07.697 ================ 00:11:07.697 Number of Power States: 1 00:11:07.697 Current Power State: Power State #0 00:11:07.697 Power State #0: 00:11:07.697 Max Power: 25.00 W 00:11:07.697 Non-Operational State: Operational 00:11:07.697 Entry Latency: 16 microseconds 00:11:07.697 Exit Latency: 4 microseconds 00:11:07.697 Relative Read Throughput: 0 00:11:07.697 Relative Read Latency: 0 00:11:07.697 Relative Write Throughput: 0 00:11:07.697 Relative Write Latency: 0 00:11:07.697 Idle Power: Not Reported 00:11:07.697 Active Power: Not Reported 00:11:07.697 Non-Operational Permissive Mode: Not Supported 00:11:07.697 00:11:07.697 Health Information 00:11:07.697 ================== 00:11:07.697 Critical Warnings: 00:11:07.697 Available Spare Space: OK 00:11:07.697 Temperature: OK 00:11:07.697 Device Reliability: OK 00:11:07.697 Read Only: No 00:11:07.697 Volatile Memory Backup: OK 00:11:07.697 Current Temperature: 323 Kelvin (50 Celsius) 00:11:07.697 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:07.697 Available Spare: 0% 00:11:07.697 Available Spare Threshold: 0% 00:11:07.697 Life Percentage Used: 0% 00:11:07.697 Data Units Read: 1049 00:11:07.697 Data Units Written: 882 00:11:07.697 Host Read Commands: 49137 00:11:07.697 Host Write Commands: 47639 00:11:07.697 Controller Busy Time: 0 minutes 00:11:07.697 Power Cycles: 0 00:11:07.697 Power On Hours: 0 hours 00:11:07.697 Unsafe Shutdowns: 0 00:11:07.697 Unrecoverable Media Errors: 0 00:11:07.697 Lifetime Error Log Entries: 0 00:11:07.697 Warning Temperature Time: 0 minutes 00:11:07.697 Critical Temperature Time: 0 minutes 00:11:07.697 00:11:07.697 Number of Queues 00:11:07.697 ================ 00:11:07.697 Number of I/O Submission Queues: 64 00:11:07.697 Number of I/O Completion Queues: 64 00:11:07.697 00:11:07.697 ZNS Specific Controller Data 00:11:07.697 ============================ 00:11:07.697 Zone Append Size Limit: 0 00:11:07.697 00:11:07.697 00:11:07.697 Active Namespaces 00:11:07.697 ================= 00:11:07.697 Namespace ID:1 00:11:07.697 Error Recovery Timeout: Unlimited 00:11:07.697 Command Set Identifier: NVM (00h) 00:11:07.697 Deallocate: Supported 00:11:07.697 Deallocated/Unwritten Error: Supported 00:11:07.697 Deallocated Read Value: All 0x00 00:11:07.697 Deallocate in Write Zeroes: Not Supported 00:11:07.697 Deallocated Guard Field: 0xFFFF 00:11:07.697 Flush: Supported 00:11:07.697 Reservation: Not Supported 00:11:07.697 Metadata Transferred as: Separate Metadata Buffer 00:11:07.697 Namespace Sharing Capabilities: Private 00:11:07.697 Size (in LBAs): 1548666 (5GiB) 00:11:07.697 Capacity (in LBAs): 1548666 (5GiB) 00:11:07.697 Utilization (in LBAs): 1548666 (5GiB) 00:11:07.697 Thin Provisioning: Not Supported 00:11:07.697 Per-NS Atomic Units: No 00:11:07.697 Maximum Single Source Range Length: 128 00:11:07.697 Maximum Copy Length: 128 00:11:07.697 Maximum Source Range Count: 
128 00:11:07.697 NGUID/EUI64 Never Reused: No 00:11:07.697 Namespace Write Protected: No 00:11:07.697 Number of LBA Formats: 8 00:11:07.697 Current LBA Format: LBA Format #07 00:11:07.697 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:07.697 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:07.697 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:07.697 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:07.697 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:07.697 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:07.697 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:07.697 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:07.697 00:11:07.697 18:03:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:07.697 18:03:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:07.956 ===================================================== 00:11:07.956 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:07.956 ===================================================== 00:11:07.956 Controller Capabilities/Features 00:11:07.956 ================================ 00:11:07.956 Vendor ID: 1b36 00:11:07.956 Subsystem Vendor ID: 1af4 00:11:07.957 Serial Number: 12341 00:11:07.957 Model Number: QEMU NVMe Ctrl 00:11:07.957 Firmware Version: 8.0.0 00:11:07.957 Recommended Arb Burst: 6 00:11:07.957 IEEE OUI Identifier: 00 54 52 00:11:07.957 Multi-path I/O 00:11:07.957 May have multiple subsystem ports: No 00:11:07.957 May have multiple controllers: No 00:11:07.957 Associated with SR-IOV VF: No 00:11:07.957 Max Data Transfer Size: 524288 00:11:07.957 Max Number of Namespaces: 256 00:11:07.957 Max Number of I/O Queues: 64 00:11:07.957 NVMe Specification Version (VS): 1.4 00:11:07.957 NVMe Specification Version (Identify): 1.4 00:11:07.957 Maximum Queue Entries: 2048 00:11:07.957 Contiguous Queues Required: Yes 00:11:07.957 Arbitration Mechanisms Supported 00:11:07.957 Weighted Round Robin: Not Supported 00:11:07.957 Vendor Specific: Not Supported 00:11:07.957 Reset Timeout: 7500 ms 00:11:07.957 Doorbell Stride: 4 bytes 00:11:07.957 NVM Subsystem Reset: Not Supported 00:11:07.957 Command Sets Supported 00:11:07.957 NVM Command Set: Supported 00:11:07.957 Boot Partition: Not Supported 00:11:07.957 Memory Page Size Minimum: 4096 bytes 00:11:07.957 Memory Page Size Maximum: 65536 bytes 00:11:07.957 Persistent Memory Region: Not Supported 00:11:07.957 Optional Asynchronous Events Supported 00:11:07.957 Namespace Attribute Notices: Supported 00:11:07.957 Firmware Activation Notices: Not Supported 00:11:07.957 ANA Change Notices: Not Supported 00:11:07.957 PLE Aggregate Log Change Notices: Not Supported 00:11:07.957 LBA Status Info Alert Notices: Not Supported 00:11:07.957 EGE Aggregate Log Change Notices: Not Supported 00:11:07.957 Normal NVM Subsystem Shutdown event: Not Supported 00:11:07.957 Zone Descriptor Change Notices: Not Supported 00:11:07.957 Discovery Log Change Notices: Not Supported 00:11:07.957 Controller Attributes 00:11:07.957 128-bit Host Identifier: Not Supported 00:11:07.957 Non-Operational Permissive Mode: Not Supported 00:11:07.957 NVM Sets: Not Supported 00:11:07.957 Read Recovery Levels: Not Supported 00:11:07.957 Endurance Groups: Not Supported 00:11:07.957 Predictable Latency Mode: Not Supported 00:11:07.957 Traffic Based Keep Alive: Not Supported 00:11:07.957 Namespace Granularity: Not Supported 00:11:07.957 SQ Associations: Not Supported
00:11:07.957 UUID List: Not Supported 00:11:07.957 Multi-Domain Subsystem: Not Supported 00:11:07.957 Fixed Capacity Management: Not Supported 00:11:07.957 Variable Capacity Management: Not Supported 00:11:07.957 Delete Endurance Group: Not Supported 00:11:07.957 Delete NVM Set: Not Supported 00:11:07.957 Extended LBA Formats Supported: Supported 00:11:07.957 Flexible Data Placement Supported: Not Supported 00:11:07.957 00:11:07.957 Controller Memory Buffer Support 00:11:07.957 ================================ 00:11:07.957 Supported: No 00:11:07.957 00:11:07.957 Persistent Memory Region Support 00:11:07.957 ================================ 00:11:07.957 Supported: No 00:11:07.957 00:11:07.957 Admin Command Set Attributes 00:11:07.957 ============================ 00:11:07.957 Security Send/Receive: Not Supported 00:11:07.957 Format NVM: Supported 00:11:07.957 Firmware Activate/Download: Not Supported 00:11:07.957 Namespace Management: Supported 00:11:07.957 Device Self-Test: Not Supported 00:11:07.957 Directives: Supported 00:11:07.957 NVMe-MI: Not Supported 00:11:07.957 Virtualization Management: Not Supported 00:11:07.957 Doorbell Buffer Config: Supported 00:11:07.957 Get LBA Status Capability: Not Supported 00:11:07.957 Command & Feature Lockdown Capability: Not Supported 00:11:07.957 Abort Command Limit: 4 00:11:07.957 Async Event Request Limit: 4 00:11:07.957 Number of Firmware Slots: N/A 00:11:07.957 Firmware Slot 1 Read-Only: N/A 00:11:07.957 Firmware Activation Without Reset: N/A 00:11:07.957 Multiple Update Detection Support: N/A 00:11:07.957 Firmware Update Granularity: No Information Provided 00:11:07.957 Per-Namespace SMART Log: Yes 00:11:07.957 Asymmetric Namespace Access Log Page: Not Supported 00:11:07.957 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:07.957 Command Effects Log Page: Supported 00:11:07.957 Get Log Page Extended Data: Supported 00:11:07.957 Telemetry Log Pages: Not Supported 00:11:07.957 Persistent Event Log Pages: Not Supported 00:11:07.957 Supported Log Pages Log Page: May Support 00:11:07.957 Commands Supported & Effects Log Page: Not Supported 00:11:07.957 Feature Identifiers & Effects Log Page:May Support 00:11:07.957 NVMe-MI Commands & Effects Log Page: May Support 00:11:07.957 Data Area 4 for Telemetry Log: Not Supported 00:11:07.957 Error Log Page Entries Supported: 1 00:11:07.957 Keep Alive: Not Supported 00:11:07.957 00:11:07.957 NVM Command Set Attributes 00:11:07.957 ========================== 00:11:07.957 Submission Queue Entry Size 00:11:07.957 Max: 64 00:11:07.957 Min: 64 00:11:07.957 Completion Queue Entry Size 00:11:07.957 Max: 16 00:11:07.957 Min: 16 00:11:07.957 Number of Namespaces: 256 00:11:07.957 Compare Command: Supported 00:11:07.957 Write Uncorrectable Command: Not Supported 00:11:07.957 Dataset Management Command: Supported 00:11:07.957 Write Zeroes Command: Supported 00:11:07.957 Set Features Save Field: Supported 00:11:07.957 Reservations: Not Supported 00:11:07.957 Timestamp: Supported 00:11:07.957 Copy: Supported 00:11:07.957 Volatile Write Cache: Present 00:11:07.957 Atomic Write Unit (Normal): 1 00:11:07.957 Atomic Write Unit (PFail): 1 00:11:07.957 Atomic Compare & Write Unit: 1 00:11:07.957 Fused Compare & Write: Not Supported 00:11:07.957 Scatter-Gather List 00:11:07.957 SGL Command Set: Supported 00:11:07.957 SGL Keyed: Not Supported 00:11:07.957 SGL Bit Bucket Descriptor: Not Supported 00:11:07.957 SGL Metadata Pointer: Not Supported 00:11:07.957 Oversized SGL: Not Supported 00:11:07.957 SGL Metadata Address: Not 
Supported 00:11:07.957 SGL Offset: Not Supported 00:11:07.957 Transport SGL Data Block: Not Supported 00:11:07.957 Replay Protected Memory Block: Not Supported 00:11:07.957 00:11:07.957 Firmware Slot Information 00:11:07.957 ========================= 00:11:07.957 Active slot: 1 00:11:07.957 Slot 1 Firmware Revision: 1.0 00:11:07.957 00:11:07.957 00:11:07.957 Commands Supported and Effects 00:11:07.957 ============================== 00:11:07.957 Admin Commands 00:11:07.957 -------------- 00:11:07.957 Delete I/O Submission Queue (00h): Supported 00:11:07.957 Create I/O Submission Queue (01h): Supported 00:11:07.957 Get Log Page (02h): Supported 00:11:07.957 Delete I/O Completion Queue (04h): Supported 00:11:07.957 Create I/O Completion Queue (05h): Supported 00:11:07.957 Identify (06h): Supported 00:11:07.957 Abort (08h): Supported 00:11:07.958 Set Features (09h): Supported 00:11:07.958 Get Features (0Ah): Supported 00:11:07.958 Asynchronous Event Request (0Ch): Supported 00:11:07.958 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:07.958 Directive Send (19h): Supported 00:11:07.958 Directive Receive (1Ah): Supported 00:11:07.958 Virtualization Management (1Ch): Supported 00:11:07.958 Doorbell Buffer Config (7Ch): Supported 00:11:07.958 Format NVM (80h): Supported LBA-Change 00:11:07.958 I/O Commands 00:11:07.958 ------------ 00:11:07.958 Flush (00h): Supported LBA-Change 00:11:07.958 Write (01h): Supported LBA-Change 00:11:07.958 Read (02h): Supported 00:11:07.958 Compare (05h): Supported 00:11:07.958 Write Zeroes (08h): Supported LBA-Change 00:11:07.958 Dataset Management (09h): Supported LBA-Change 00:11:07.958 Unknown (0Ch): Supported 00:11:07.958 Unknown (12h): Supported 00:11:07.958 Copy (19h): Supported LBA-Change 00:11:07.958 Unknown (1Dh): Supported LBA-Change 00:11:07.958 00:11:07.958 Error Log 00:11:07.958 ========= 00:11:07.958 00:11:07.958 Arbitration 00:11:07.958 =========== 00:11:07.958 Arbitration Burst: no limit 00:11:07.958 00:11:07.958 Power Management 00:11:07.958 ================ 00:11:07.958 Number of Power States: 1 00:11:07.958 Current Power State: Power State #0 00:11:07.958 Power State #0: 00:11:07.958 Max Power: 25.00 W 00:11:07.958 Non-Operational State: Operational 00:11:07.958 Entry Latency: 16 microseconds 00:11:07.958 Exit Latency: 4 microseconds 00:11:07.958 Relative Read Throughput: 0 00:11:07.958 Relative Read Latency: 0 00:11:07.958 Relative Write Throughput: 0 00:11:07.958 Relative Write Latency: 0 00:11:07.958 Idle Power: Not Reported 00:11:07.958 Active Power: Not Reported 00:11:07.958 Non-Operational Permissive Mode: Not Supported 00:11:07.958 00:11:07.958 Health Information 00:11:07.958 ================== 00:11:07.958 Critical Warnings: 00:11:07.958 Available Spare Space: OK 00:11:07.958 Temperature: OK 00:11:07.958 Device Reliability: OK 00:11:07.958 Read Only: No 00:11:07.958 Volatile Memory Backup: OK 00:11:07.958 Current Temperature: 323 Kelvin (50 Celsius) 00:11:07.958 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:07.958 Available Spare: 0% 00:11:07.958 Available Spare Threshold: 0% 00:11:07.958 Life Percentage Used: 0% 00:11:07.958 Data Units Read: 767 00:11:07.958 Data Units Written: 617 00:11:07.958 Host Read Commands: 34793 00:11:07.958 Host Write Commands: 32530 00:11:07.958 Controller Busy Time: 0 minutes 00:11:07.958 Power Cycles: 0 00:11:07.958 Power On Hours: 0 hours 00:11:07.958 Unsafe Shutdowns: 0 00:11:07.958 Unrecoverable Media Errors: 0 00:11:07.958 Lifetime Error Log Entries: 0 00:11:07.958 Warning 
Temperature Time: 0 minutes 00:11:07.958 Critical Temperature Time: 0 minutes 00:11:07.958 00:11:07.958 Number of Queues 00:11:07.958 ================ 00:11:07.958 Number of I/O Submission Queues: 64 00:11:07.958 Number of I/O Completion Queues: 64 00:11:07.958 00:11:07.958 ZNS Specific Controller Data 00:11:07.958 ============================ 00:11:07.958 Zone Append Size Limit: 0 00:11:07.958 00:11:07.958 00:11:07.958 Active Namespaces 00:11:07.958 ================= 00:11:07.958 Namespace ID:1 00:11:07.958 Error Recovery Timeout: Unlimited 00:11:07.958 Command Set Identifier: NVM (00h) 00:11:07.958 Deallocate: Supported 00:11:07.958 Deallocated/Unwritten Error: Supported 00:11:07.958 Deallocated Read Value: All 0x00 00:11:07.958 Deallocate in Write Zeroes: Not Supported 00:11:07.958 Deallocated Guard Field: 0xFFFF 00:11:07.958 Flush: Supported 00:11:07.958 Reservation: Not Supported 00:11:07.958 Namespace Sharing Capabilities: Private 00:11:07.958 Size (in LBAs): 1310720 (5GiB) 00:11:07.958 Capacity (in LBAs): 1310720 (5GiB) 00:11:07.958 Utilization (in LBAs): 1310720 (5GiB) 00:11:07.958 Thin Provisioning: Not Supported 00:11:07.958 Per-NS Atomic Units: No 00:11:07.958 Maximum Single Source Range Length: 128 00:11:07.958 Maximum Copy Length: 128 00:11:07.958 Maximum Source Range Count: 128 00:11:07.958 NGUID/EUI64 Never Reused: No 00:11:07.958 Namespace Write Protected: No 00:11:07.958 Number of LBA Formats: 8 00:11:07.958 Current LBA Format: LBA Format #04 00:11:07.958 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:07.958 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:07.958 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:07.958 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:07.958 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:07.958 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:07.958 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:07.958 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:07.958 00:11:07.958 18:03:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:07.958 18:03:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:08.217 ===================================================== 00:11:08.217 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:08.218 ===================================================== 00:11:08.218 Controller Capabilities/Features 00:11:08.218 ================================ 00:11:08.218 Vendor ID: 1b36 00:11:08.218 Subsystem Vendor ID: 1af4 00:11:08.218 Serial Number: 12342 00:11:08.218 Model Number: QEMU NVMe Ctrl 00:11:08.218 Firmware Version: 8.0.0 00:11:08.218 Recommended Arb Burst: 6 00:11:08.218 IEEE OUI Identifier: 00 54 52 00:11:08.218 Multi-path I/O 00:11:08.218 May have multiple subsystem ports: No 00:11:08.218 May have multiple controllers: No 00:11:08.218 Associated with SR-IOV VF: No 00:11:08.218 Max Data Transfer Size: 524288 00:11:08.218 Max Number of Namespaces: 256 00:11:08.218 Max Number of I/O Queues: 64 00:11:08.218 NVMe Specification Version (VS): 1.4 00:11:08.218 NVMe Specification Version (Identify): 1.4 00:11:08.218 Maximum Queue Entries: 2048 00:11:08.218 Contiguous Queues Required: Yes 00:11:08.218 Arbitration Mechanisms Supported 00:11:08.218 Weighted Round Robin: Not Supported 00:11:08.218 Vendor Specific: Not Supported 00:11:08.218 Reset Timeout: 7500 ms 00:11:08.218 Doorbell Stride: 4 bytes 00:11:08.218 NVM Subsystem Reset: Not Supported 
00:11:08.218 Command Sets Supported 00:11:08.218 NVM Command Set: Supported 00:11:08.218 Boot Partition: Not Supported 00:11:08.218 Memory Page Size Minimum: 4096 bytes 00:11:08.218 Memory Page Size Maximum: 65536 bytes 00:11:08.218 Persistent Memory Region: Not Supported 00:11:08.218 Optional Asynchronous Events Supported 00:11:08.218 Namespace Attribute Notices: Supported 00:11:08.218 Firmware Activation Notices: Not Supported 00:11:08.218 ANA Change Notices: Not Supported 00:11:08.218 PLE Aggregate Log Change Notices: Not Supported 00:11:08.218 LBA Status Info Alert Notices: Not Supported 00:11:08.218 EGE Aggregate Log Change Notices: Not Supported 00:11:08.218 Normal NVM Subsystem Shutdown event: Not Supported 00:11:08.218 Zone Descriptor Change Notices: Not Supported 00:11:08.218 Discovery Log Change Notices: Not Supported 00:11:08.218 Controller Attributes 00:11:08.218 128-bit Host Identifier: Not Supported 00:11:08.218 Non-Operational Permissive Mode: Not Supported 00:11:08.218 NVM Sets: Not Supported 00:11:08.218 Read Recovery Levels: Not Supported 00:11:08.218 Endurance Groups: Not Supported 00:11:08.218 Predictable Latency Mode: Not Supported 00:11:08.218 Traffic Based Keep Alive: Not Supported 00:11:08.218 Namespace Granularity: Not Supported 00:11:08.218 SQ Associations: Not Supported 00:11:08.218 UUID List: Not Supported 00:11:08.218 Multi-Domain Subsystem: Not Supported 00:11:08.218 Fixed Capacity Management: Not Supported 00:11:08.218 Variable Capacity Management: Not Supported 00:11:08.218 Delete Endurance Group: Not Supported 00:11:08.218 Delete NVM Set: Not Supported 00:11:08.218 Extended LBA Formats Supported: Supported 00:11:08.218 Flexible Data Placement Supported: Not Supported 00:11:08.218 00:11:08.218 Controller Memory Buffer Support 00:11:08.218 ================================ 00:11:08.218 Supported: No 00:11:08.218 00:11:08.218 Persistent Memory Region Support 00:11:08.218 ================================ 00:11:08.218 Supported: No 00:11:08.218 00:11:08.218 Admin Command Set Attributes 00:11:08.218 ============================ 00:11:08.218 Security Send/Receive: Not Supported 00:11:08.218 Format NVM: Supported 00:11:08.218 Firmware Activate/Download: Not Supported 00:11:08.218 Namespace Management: Supported 00:11:08.218 Device Self-Test: Not Supported 00:11:08.218 Directives: Supported 00:11:08.218 NVMe-MI: Not Supported 00:11:08.218 Virtualization Management: Not Supported 00:11:08.218 Doorbell Buffer Config: Supported 00:11:08.218 Get LBA Status Capability: Not Supported 00:11:08.218 Command & Feature Lockdown Capability: Not Supported 00:11:08.218 Abort Command Limit: 4 00:11:08.218 Async Event Request Limit: 4 00:11:08.218 Number of Firmware Slots: N/A 00:11:08.218 Firmware Slot 1 Read-Only: N/A 00:11:08.218 Firmware Activation Without Reset: N/A 00:11:08.218 Multiple Update Detection Support: N/A 00:11:08.218 Firmware Update Granularity: No Information Provided 00:11:08.218 Per-Namespace SMART Log: Yes 00:11:08.218 Asymmetric Namespace Access Log Page: Not Supported 00:11:08.218 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:08.218 Command Effects Log Page: Supported 00:11:08.218 Get Log Page Extended Data: Supported 00:11:08.218 Telemetry Log Pages: Not Supported 00:11:08.218 Persistent Event Log Pages: Not Supported 00:11:08.218 Supported Log Pages Log Page: May Support 00:11:08.218 Commands Supported & Effects Log Page: Not Supported 00:11:08.218 Feature Identifiers & Effects Log Page: May Support 00:11:08.218 NVMe-MI Commands & Effects Log Page: May
Support 00:11:08.218 Data Area 4 for Telemetry Log: Not Supported 00:11:08.218 Error Log Page Entries Supported: 1 00:11:08.218 Keep Alive: Not Supported 00:11:08.218 00:11:08.218 NVM Command Set Attributes 00:11:08.218 ========================== 00:11:08.218 Submission Queue Entry Size 00:11:08.218 Max: 64 00:11:08.218 Min: 64 00:11:08.218 Completion Queue Entry Size 00:11:08.218 Max: 16 00:11:08.218 Min: 16 00:11:08.218 Number of Namespaces: 256 00:11:08.218 Compare Command: Supported 00:11:08.218 Write Uncorrectable Command: Not Supported 00:11:08.218 Dataset Management Command: Supported 00:11:08.218 Write Zeroes Command: Supported 00:11:08.218 Set Features Save Field: Supported 00:11:08.218 Reservations: Not Supported 00:11:08.218 Timestamp: Supported 00:11:08.218 Copy: Supported 00:11:08.218 Volatile Write Cache: Present 00:11:08.218 Atomic Write Unit (Normal): 1 00:11:08.218 Atomic Write Unit (PFail): 1 00:11:08.218 Atomic Compare & Write Unit: 1 00:11:08.218 Fused Compare & Write: Not Supported 00:11:08.218 Scatter-Gather List 00:11:08.218 SGL Command Set: Supported 00:11:08.218 SGL Keyed: Not Supported 00:11:08.218 SGL Bit Bucket Descriptor: Not Supported 00:11:08.218 SGL Metadata Pointer: Not Supported 00:11:08.218 Oversized SGL: Not Supported 00:11:08.218 SGL Metadata Address: Not Supported 00:11:08.218 SGL Offset: Not Supported 00:11:08.218 Transport SGL Data Block: Not Supported 00:11:08.218 Replay Protected Memory Block: Not Supported 00:11:08.218 00:11:08.218 Firmware Slot Information 00:11:08.218 ========================= 00:11:08.218 Active slot: 1 00:11:08.218 Slot 1 Firmware Revision: 1.0 00:11:08.218 00:11:08.218 00:11:08.218 Commands Supported and Effects 00:11:08.218 ============================== 00:11:08.218 Admin Commands 00:11:08.218 -------------- 00:11:08.218 Delete I/O Submission Queue (00h): Supported 00:11:08.218 Create I/O Submission Queue (01h): Supported 00:11:08.218 Get Log Page (02h): Supported 00:11:08.218 Delete I/O Completion Queue (04h): Supported 00:11:08.218 Create I/O Completion Queue (05h): Supported 00:11:08.218 Identify (06h): Supported 00:11:08.218 Abort (08h): Supported 00:11:08.218 Set Features (09h): Supported 00:11:08.218 Get Features (0Ah): Supported 00:11:08.218 Asynchronous Event Request (0Ch): Supported 00:11:08.218 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:08.218 Directive Send (19h): Supported 00:11:08.218 Directive Receive (1Ah): Supported 00:11:08.218 Virtualization Management (1Ch): Supported 00:11:08.218 Doorbell Buffer Config (7Ch): Supported 00:11:08.218 Format NVM (80h): Supported LBA-Change 00:11:08.218 I/O Commands 00:11:08.218 ------------ 00:11:08.218 Flush (00h): Supported LBA-Change 00:11:08.218 Write (01h): Supported LBA-Change 00:11:08.218 Read (02h): Supported 00:11:08.218 Compare (05h): Supported 00:11:08.218 Write Zeroes (08h): Supported LBA-Change 00:11:08.218 Dataset Management (09h): Supported LBA-Change 00:11:08.218 Unknown (0Ch): Supported 00:11:08.218 Unknown (12h): Supported 00:11:08.218 Copy (19h): Supported LBA-Change 00:11:08.218 Unknown (1Dh): Supported LBA-Change 00:11:08.218 00:11:08.218 Error Log 00:11:08.218 ========= 00:11:08.218 00:11:08.218 Arbitration 00:11:08.218 =========== 00:11:08.218 Arbitration Burst: no limit 00:11:08.218 00:11:08.218 Power Management 00:11:08.218 ================ 00:11:08.218 Number of Power States: 1 00:11:08.218 Current Power State: Power State #0 00:11:08.218 Power State #0: 00:11:08.218 Max Power: 25.00 W 00:11:08.218 Non-Operational State: 
Operational 00:11:08.218 Entry Latency: 16 microseconds 00:11:08.218 Exit Latency: 4 microseconds 00:11:08.218 Relative Read Throughput: 0 00:11:08.218 Relative Read Latency: 0 00:11:08.218 Relative Write Throughput: 0 00:11:08.218 Relative Write Latency: 0 00:11:08.218 Idle Power: Not Reported 00:11:08.218 Active Power: Not Reported 00:11:08.218 Non-Operational Permissive Mode: Not Supported 00:11:08.218 00:11:08.218 Health Information 00:11:08.218 ================== 00:11:08.218 Critical Warnings: 00:11:08.218 Available Spare Space: OK 00:11:08.218 Temperature: OK 00:11:08.218 Device Reliability: OK 00:11:08.218 Read Only: No 00:11:08.218 Volatile Memory Backup: OK 00:11:08.218 Current Temperature: 323 Kelvin (50 Celsius) 00:11:08.218 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:08.218 Available Spare: 0% 00:11:08.218 Available Spare Threshold: 0% 00:11:08.218 Life Percentage Used: 0% 00:11:08.218 Data Units Read: 2222 00:11:08.218 Data Units Written: 1902 00:11:08.218 Host Read Commands: 102301 00:11:08.218 Host Write Commands: 98071 00:11:08.218 Controller Busy Time: 0 minutes 00:11:08.218 Power Cycles: 0 00:11:08.218 Power On Hours: 0 hours 00:11:08.218 Unsafe Shutdowns: 0 00:11:08.218 Unrecoverable Media Errors: 0 00:11:08.218 Lifetime Error Log Entries: 0 00:11:08.218 Warning Temperature Time: 0 minutes 00:11:08.218 Critical Temperature Time: 0 minutes 00:11:08.218 00:11:08.218 Number of Queues 00:11:08.218 ================ 00:11:08.218 Number of I/O Submission Queues: 64 00:11:08.218 Number of I/O Completion Queues: 64 00:11:08.218 00:11:08.218 ZNS Specific Controller Data 00:11:08.218 ============================ 00:11:08.218 Zone Append Size Limit: 0 00:11:08.218 00:11:08.218 00:11:08.218 Active Namespaces 00:11:08.218 ================= 00:11:08.218 Namespace ID:1 00:11:08.218 Error Recovery Timeout: Unlimited 00:11:08.219 Command Set Identifier: NVM (00h) 00:11:08.219 Deallocate: Supported 00:11:08.219 Deallocated/Unwritten Error: Supported 00:11:08.219 Deallocated Read Value: All 0x00 00:11:08.219 Deallocate in Write Zeroes: Not Supported 00:11:08.219 Deallocated Guard Field: 0xFFFF 00:11:08.219 Flush: Supported 00:11:08.219 Reservation: Not Supported 00:11:08.219 Namespace Sharing Capabilities: Private 00:11:08.219 Size (in LBAs): 1048576 (4GiB) 00:11:08.219 Capacity (in LBAs): 1048576 (4GiB) 00:11:08.219 Utilization (in LBAs): 1048576 (4GiB) 00:11:08.219 Thin Provisioning: Not Supported 00:11:08.219 Per-NS Atomic Units: No 00:11:08.219 Maximum Single Source Range Length: 128 00:11:08.219 Maximum Copy Length: 128 00:11:08.219 Maximum Source Range Count: 128 00:11:08.219 NGUID/EUI64 Never Reused: No 00:11:08.219 Namespace Write Protected: No 00:11:08.219 Number of LBA Formats: 8 00:11:08.219 Current LBA Format: LBA Format #04 00:11:08.219 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:08.219 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:08.219 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:08.219 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:08.219 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:08.219 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:08.219 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:08.219 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:08.219 00:11:08.219 Namespace ID:2 00:11:08.219 Error Recovery Timeout: Unlimited 00:11:08.219 Command Set Identifier: NVM (00h) 00:11:08.219 Deallocate: Supported 00:11:08.219 Deallocated/Unwritten Error: Supported 00:11:08.219 Deallocated Read Value: All 
0x00 00:11:08.219 Deallocate in Write Zeroes: Not Supported 00:11:08.219 Deallocated Guard Field: 0xFFFF 00:11:08.219 Flush: Supported 00:11:08.219 Reservation: Not Supported 00:11:08.219 Namespace Sharing Capabilities: Private 00:11:08.219 Size (in LBAs): 1048576 (4GiB) 00:11:08.219 Capacity (in LBAs): 1048576 (4GiB) 00:11:08.219 Utilization (in LBAs): 1048576 (4GiB) 00:11:08.219 Thin Provisioning: Not Supported 00:11:08.219 Per-NS Atomic Units: No 00:11:08.219 Maximum Single Source Range Length: 128 00:11:08.219 Maximum Copy Length: 128 00:11:08.219 Maximum Source Range Count: 128 00:11:08.219 NGUID/EUI64 Never Reused: No 00:11:08.219 Namespace Write Protected: No 00:11:08.219 Number of LBA Formats: 8 00:11:08.219 Current LBA Format: LBA Format #04 00:11:08.219 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:08.219 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:08.219 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:08.219 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:08.219 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:08.219 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:08.219 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:08.219 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:08.219 00:11:08.219 Namespace ID:3 00:11:08.219 Error Recovery Timeout: Unlimited 00:11:08.219 Command Set Identifier: NVM (00h) 00:11:08.219 Deallocate: Supported 00:11:08.219 Deallocated/Unwritten Error: Supported 00:11:08.219 Deallocated Read Value: All 0x00 00:11:08.219 Deallocate in Write Zeroes: Not Supported 00:11:08.219 Deallocated Guard Field: 0xFFFF 00:11:08.219 Flush: Supported 00:11:08.219 Reservation: Not Supported 00:11:08.219 Namespace Sharing Capabilities: Private 00:11:08.219 Size (in LBAs): 1048576 (4GiB) 00:11:08.219 Capacity (in LBAs): 1048576 (4GiB) 00:11:08.219 Utilization (in LBAs): 1048576 (4GiB) 00:11:08.219 Thin Provisioning: Not Supported 00:11:08.219 Per-NS Atomic Units: No 00:11:08.219 Maximum Single Source Range Length: 128 00:11:08.219 Maximum Copy Length: 128 00:11:08.219 Maximum Source Range Count: 128 00:11:08.219 NGUID/EUI64 Never Reused: No 00:11:08.219 Namespace Write Protected: No 00:11:08.219 Number of LBA Formats: 8 00:11:08.219 Current LBA Format: LBA Format #04 00:11:08.219 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:08.219 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:08.219 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:08.219 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:08.219 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:08.219 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:08.219 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:08.219 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:08.219 00:11:08.219 18:03:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:08.219 18:03:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:08.483 ===================================================== 00:11:08.483 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:08.483 ===================================================== 00:11:08.483 Controller Capabilities/Features 00:11:08.483 ================================ 00:11:08.483 Vendor ID: 1b36 00:11:08.483 Subsystem Vendor ID: 1af4 00:11:08.483 Serial Number: 12343 00:11:08.483 Model Number: QEMU NVMe Ctrl 00:11:08.483 Firmware Version: 8.0.0 00:11:08.483 Recommended Arb Burst: 6 
00:11:08.483 IEEE OUI Identifier: 00 54 52 00:11:08.483 Multi-path I/O 00:11:08.483 May have multiple subsystem ports: No 00:11:08.483 May have multiple controllers: Yes 00:11:08.483 Associated with SR-IOV VF: No 00:11:08.483 Max Data Transfer Size: 524288 00:11:08.483 Max Number of Namespaces: 256 00:11:08.483 Max Number of I/O Queues: 64 00:11:08.483 NVMe Specification Version (VS): 1.4 00:11:08.483 NVMe Specification Version (Identify): 1.4 00:11:08.483 Maximum Queue Entries: 2048 00:11:08.483 Contiguous Queues Required: Yes 00:11:08.483 Arbitration Mechanisms Supported 00:11:08.483 Weighted Round Robin: Not Supported 00:11:08.483 Vendor Specific: Not Supported 00:11:08.483 Reset Timeout: 7500 ms 00:11:08.483 Doorbell Stride: 4 bytes 00:11:08.483 NVM Subsystem Reset: Not Supported 00:11:08.483 Command Sets Supported 00:11:08.483 NVM Command Set: Supported 00:11:08.483 Boot Partition: Not Supported 00:11:08.483 Memory Page Size Minimum: 4096 bytes 00:11:08.483 Memory Page Size Maximum: 65536 bytes 00:11:08.483 Persistent Memory Region: Not Supported 00:11:08.483 Optional Asynchronous Events Supported 00:11:08.483 Namespace Attribute Notices: Supported 00:11:08.483 Firmware Activation Notices: Not Supported 00:11:08.483 ANA Change Notices: Not Supported 00:11:08.483 PLE Aggregate Log Change Notices: Not Supported 00:11:08.483 LBA Status Info Alert Notices: Not Supported 00:11:08.483 EGE Aggregate Log Change Notices: Not Supported 00:11:08.483 Normal NVM Subsystem Shutdown event: Not Supported 00:11:08.483 Zone Descriptor Change Notices: Not Supported 00:11:08.483 Discovery Log Change Notices: Not Supported 00:11:08.483 Controller Attributes 00:11:08.483 128-bit Host Identifier: Not Supported 00:11:08.483 Non-Operational Permissive Mode: Not Supported 00:11:08.483 NVM Sets: Not Supported 00:11:08.483 Read Recovery Levels: Not Supported 00:11:08.483 Endurance Groups: Supported 00:11:08.483 Predictable Latency Mode: Not Supported 00:11:08.483 Traffic Based Keep Alive: Not Supported 00:11:08.483 Namespace Granularity: Not Supported 00:11:08.483 SQ Associations: Not Supported 00:11:08.483 UUID List: Not Supported 00:11:08.483 Multi-Domain Subsystem: Not Supported 00:11:08.483 Fixed Capacity Management: Not Supported 00:11:08.483 Variable Capacity Management: Not Supported 00:11:08.483 Delete Endurance Group: Not Supported 00:11:08.483 Delete NVM Set: Not Supported 00:11:08.483 Extended LBA Formats Supported: Supported 00:11:08.483 Flexible Data Placement Supported: Supported 00:11:08.483 00:11:08.483 Controller Memory Buffer Support 00:11:08.483 ================================ 00:11:08.483 Supported: No 00:11:08.483 00:11:08.483 Persistent Memory Region Support 00:11:08.483 ================================ 00:11:08.483 Supported: No 00:11:08.483 00:11:08.483 Admin Command Set Attributes 00:11:08.483 ============================ 00:11:08.483 Security Send/Receive: Not Supported 00:11:08.483 Format NVM: Supported 00:11:08.483 Firmware Activate/Download: Not Supported 00:11:08.483 Namespace Management: Supported 00:11:08.483 Device Self-Test: Not Supported 00:11:08.483 Directives: Supported 00:11:08.483 NVMe-MI: Not Supported 00:11:08.483 Virtualization Management: Not Supported 00:11:08.483 Doorbell Buffer Config: Supported 00:11:08.483 Get LBA Status Capability: Not Supported 00:11:08.483 Command & Feature Lockdown Capability: Not Supported 00:11:08.483 Abort Command Limit: 4 00:11:08.483 Async Event Request Limit: 4 00:11:08.483 Number of Firmware Slots: N/A 00:11:08.483 Firmware Slot 1
Read-Only: N/A 00:11:08.483 Firmware Activation Without Reset: N/A 00:11:08.483 Multiple Update Detection Support: N/A 00:11:08.483 Firmware Update Granularity: No Information Provided 00:11:08.483 Per-Namespace SMART Log: Yes 00:11:08.483 Asymmetric Namespace Access Log Page: Not Supported 00:11:08.483 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:08.483 Command Effects Log Page: Supported 00:11:08.483 Get Log Page Extended Data: Supported 00:11:08.483 Telemetry Log Pages: Not Supported 00:11:08.483 Persistent Event Log Pages: Not Supported 00:11:08.483 Supported Log Pages Log Page: May Support 00:11:08.483 Commands Supported & Effects Log Page: Not Supported 00:11:08.483 Feature Identifiers & Effects Log Page: May Support 00:11:08.483 NVMe-MI Commands & Effects Log Page: May Support 00:11:08.483 Data Area 4 for Telemetry Log: Not Supported 00:11:08.483 Error Log Page Entries Supported: 1 00:11:08.483 Keep Alive: Not Supported 00:11:08.483 00:11:08.483 NVM Command Set Attributes 00:11:08.483 ========================== 00:11:08.483 Submission Queue Entry Size 00:11:08.483 Max: 64 00:11:08.483 Min: 64 00:11:08.483 Completion Queue Entry Size 00:11:08.483 Max: 16 00:11:08.483 Min: 16 00:11:08.483 Number of Namespaces: 256 00:11:08.483 Compare Command: Supported 00:11:08.483 Write Uncorrectable Command: Not Supported 00:11:08.484 Dataset Management Command: Supported 00:11:08.484 Write Zeroes Command: Supported 00:11:08.484 Set Features Save Field: Supported 00:11:08.484 Reservations: Not Supported 00:11:08.484 Timestamp: Supported 00:11:08.484 Copy: Supported 00:11:08.484 Volatile Write Cache: Present 00:11:08.484 Atomic Write Unit (Normal): 1 00:11:08.484 Atomic Write Unit (PFail): 1 00:11:08.484 Atomic Compare & Write Unit: 1 00:11:08.484 Fused Compare & Write: Not Supported 00:11:08.484 Scatter-Gather List 00:11:08.484 SGL Command Set: Supported 00:11:08.484 SGL Keyed: Not Supported 00:11:08.484 SGL Bit Bucket Descriptor: Not Supported 00:11:08.484 SGL Metadata Pointer: Not Supported 00:11:08.484 Oversized SGL: Not Supported 00:11:08.484 SGL Metadata Address: Not Supported 00:11:08.484 SGL Offset: Not Supported 00:11:08.484 Transport SGL Data Block: Not Supported 00:11:08.484 Replay Protected Memory Block: Not Supported 00:11:08.484 00:11:08.484 Firmware Slot Information 00:11:08.484 ========================= 00:11:08.484 Active slot: 1 00:11:08.484 Slot 1 Firmware Revision: 1.0 00:11:08.484 00:11:08.484 00:11:08.484 Commands Supported and Effects 00:11:08.484 ============================== 00:11:08.484 Admin Commands 00:11:08.484 -------------- 00:11:08.484 Delete I/O Submission Queue (00h): Supported 00:11:08.484 Create I/O Submission Queue (01h): Supported 00:11:08.484 Get Log Page (02h): Supported 00:11:08.484 Delete I/O Completion Queue (04h): Supported 00:11:08.484 Create I/O Completion Queue (05h): Supported 00:11:08.484 Identify (06h): Supported 00:11:08.484 Abort (08h): Supported 00:11:08.484 Set Features (09h): Supported 00:11:08.484 Get Features (0Ah): Supported 00:11:08.484 Asynchronous Event Request (0Ch): Supported 00:11:08.484 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:08.484 Directive Send (19h): Supported 00:11:08.484 Directive Receive (1Ah): Supported 00:11:08.484 Virtualization Management (1Ch): Supported 00:11:08.484 Doorbell Buffer Config (7Ch): Supported 00:11:08.484 Format NVM (80h): Supported LBA-Change 00:11:08.484 I/O Commands 00:11:08.484 ------------ 00:11:08.484 Flush (00h): Supported LBA-Change 00:11:08.484 Write (01h): Supported
LBA-Change 00:11:08.484 Read (02h): Supported 00:11:08.484 Compare (05h): Supported 00:11:08.484 Write Zeroes (08h): Supported LBA-Change 00:11:08.484 Dataset Management (09h): Supported LBA-Change 00:11:08.484 Unknown (0Ch): Supported 00:11:08.484 Unknown (12h): Supported 00:11:08.484 Copy (19h): Supported LBA-Change 00:11:08.484 Unknown (1Dh): Supported LBA-Change 00:11:08.484 00:11:08.484 Error Log 00:11:08.484 ========= 00:11:08.484 00:11:08.484 Arbitration 00:11:08.484 =========== 00:11:08.484 Arbitration Burst: no limit 00:11:08.484 00:11:08.484 Power Management 00:11:08.484 ================ 00:11:08.484 Number of Power States: 1 00:11:08.484 Current Power State: Power State #0 00:11:08.484 Power State #0: 00:11:08.484 Max Power: 25.00 W 00:11:08.484 Non-Operational State: Operational 00:11:08.484 Entry Latency: 16 microseconds 00:11:08.484 Exit Latency: 4 microseconds 00:11:08.484 Relative Read Throughput: 0 00:11:08.484 Relative Read Latency: 0 00:11:08.484 Relative Write Throughput: 0 00:11:08.484 Relative Write Latency: 0 00:11:08.484 Idle Power: Not Reported 00:11:08.484 Active Power: Not Reported 00:11:08.484 Non-Operational Permissive Mode: Not Supported 00:11:08.484 00:11:08.484 Health Information 00:11:08.484 ================== 00:11:08.484 Critical Warnings: 00:11:08.484 Available Spare Space: OK 00:11:08.484 Temperature: OK 00:11:08.484 Device Reliability: OK 00:11:08.484 Read Only: No 00:11:08.484 Volatile Memory Backup: OK 00:11:08.484 Current Temperature: 323 Kelvin (50 Celsius) 00:11:08.484 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:08.484 Available Spare: 0% 00:11:08.484 Available Spare Threshold: 0% 00:11:08.484 Life Percentage Used: 0% 00:11:08.484 Data Units Read: 792 00:11:08.484 Data Units Written: 686 00:11:08.484 Host Read Commands: 34661 00:11:08.484 Host Write Commands: 33251 00:11:08.484 Controller Busy Time: 0 minutes 00:11:08.484 Power Cycles: 0 00:11:08.484 Power On Hours: 0 hours 00:11:08.484 Unsafe Shutdowns: 0 00:11:08.484 Unrecoverable Media Errors: 0 00:11:08.484 Lifetime Error Log Entries: 0 00:11:08.484 Warning Temperature Time: 0 minutes 00:11:08.484 Critical Temperature Time: 0 minutes 00:11:08.484 00:11:08.484 Number of Queues 00:11:08.484 ================ 00:11:08.484 Number of I/O Submission Queues: 64 00:11:08.484 Number of I/O Completion Queues: 64 00:11:08.484 00:11:08.484 ZNS Specific Controller Data 00:11:08.484 ============================ 00:11:08.484 Zone Append Size Limit: 0 00:11:08.484 00:11:08.484 00:11:08.484 Active Namespaces 00:11:08.484 ================= 00:11:08.484 Namespace ID:1 00:11:08.484 Error Recovery Timeout: Unlimited 00:11:08.484 Command Set Identifier: NVM (00h) 00:11:08.484 Deallocate: Supported 00:11:08.484 Deallocated/Unwritten Error: Supported 00:11:08.484 Deallocated Read Value: All 0x00 00:11:08.484 Deallocate in Write Zeroes: Not Supported 00:11:08.484 Deallocated Guard Field: 0xFFFF 00:11:08.484 Flush: Supported 00:11:08.484 Reservation: Not Supported 00:11:08.484 Namespace Sharing Capabilities: Multiple Controllers 00:11:08.484 Size (in LBAs): 262144 (1GiB) 00:11:08.484 Capacity (in LBAs): 262144 (1GiB) 00:11:08.484 Utilization (in LBAs): 262144 (1GiB) 00:11:08.484 Thin Provisioning: Not Supported 00:11:08.484 Per-NS Atomic Units: No 00:11:08.484 Maximum Single Source Range Length: 128 00:11:08.484 Maximum Copy Length: 128 00:11:08.484 Maximum Source Range Count: 128 00:11:08.484 NGUID/EUI64 Never Reused: No 00:11:08.484 Namespace Write Protected: No 00:11:08.484 Endurance group ID: 1 00:11:08.484 
Number of LBA Formats: 8 00:11:08.484 Current LBA Format: LBA Format #04 00:11:08.484 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:08.484 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:08.484 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:08.484 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:08.484 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:08.484 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:08.484 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:08.484 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:08.484 00:11:08.484 Get Feature FDP: 00:11:08.484 ================ 00:11:08.484 Enabled: Yes 00:11:08.484 FDP configuration index: 0 00:11:08.484 00:11:08.484 FDP configurations log page 00:11:08.484 =========================== 00:11:08.484 Number of FDP configurations: 1 00:11:08.484 Version: 0 00:11:08.484 Size: 112 00:11:08.484 FDP Configuration Descriptor: 0 00:11:08.484 Descriptor Size: 96 00:11:08.484 Reclaim Group Identifier format: 2 00:11:08.484 FDP Volatile Write Cache: Not Present 00:11:08.484 FDP Configuration: Valid 00:11:08.484 Vendor Specific Size: 0 00:11:08.484 Number of Reclaim Groups: 2 00:11:08.484 Number of Reclaim Unit Handles: 8 00:11:08.484 Max Placement Identifiers: 128 00:11:08.484 Number of Namespaces Supported: 256 00:11:08.484 Reclaim Unit Nominal Size: 6000000 bytes 00:11:08.484 Estimated Reclaim Unit Time Limit: Not Reported 00:11:08.484 RUH Desc #000: RUH Type: Initially Isolated 00:11:08.485 RUH Desc #001: RUH Type: Initially Isolated 00:11:08.485 RUH Desc #002: RUH Type: Initially Isolated 00:11:08.485 RUH Desc #003: RUH Type: Initially Isolated 00:11:08.485 RUH Desc #004: RUH Type: Initially Isolated 00:11:08.485 RUH Desc #005: RUH Type: Initially Isolated 00:11:08.485 RUH Desc #006: RUH Type: Initially Isolated 00:11:08.485 RUH Desc #007: RUH Type: Initially Isolated 00:11:08.485 00:11:08.485 FDP reclaim unit handle usage log page 00:11:08.774 ====================================== 00:11:08.774 Number of Reclaim Unit Handles: 8 00:11:08.774 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:08.774 RUH Usage Desc #001: RUH Attributes: Unused 00:11:08.774 RUH Usage Desc #002: RUH Attributes: Unused 00:11:08.774 RUH Usage Desc #003: RUH Attributes: Unused 00:11:08.774 RUH Usage Desc #004: RUH Attributes: Unused 00:11:08.774 RUH Usage Desc #005: RUH Attributes: Unused 00:11:08.774 RUH Usage Desc #006: RUH Attributes: Unused 00:11:08.774 RUH Usage Desc #007: RUH Attributes: Unused 00:11:08.774 00:11:08.774 FDP statistics log page 00:11:08.774 ======================= 00:11:08.774 Host bytes with metadata written: 430743552 00:11:08.774 Media bytes with metadata written: 430788608 00:11:08.774 Media bytes erased: 0 00:11:08.774 00:11:08.774 FDP events log page 00:11:08.774 =================== 00:11:08.774 Number of FDP events: 0 00:11:08.774 00:11:08.774 00:11:08.774 real 0m1.617s 00:11:08.774 user 0m0.638s 00:11:08.774 sys 0m0.761s 00:11:08.774 18:03:01 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:08.774 18:03:01 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:08.774 ************************************ 00:11:08.774 END TEST nvme_identify 00:11:08.774 ************************************ 00:11:08.774 18:03:01 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:08.774 18:03:01 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:08.774 18:03:01 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable
00:11:08.774 18:03:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:08.774 ************************************ 00:11:08.774 START TEST nvme_perf 00:11:08.774 ************************************ 00:11:08.774 18:03:01 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:11:08.774 18:03:01 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:10.152 Initializing NVMe Controllers 00:11:10.152 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:10.152 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:10.152 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:10.152 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:10.152 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:10.152 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:10.152 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:10.152 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:10.152 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:10.152 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:10.152 Initialization complete. Launching workers. 00:11:10.152 ======================================================== 00:11:10.152 Latency(us) 00:11:10.152 Device Information : IOPS MiB/s Average min max 00:11:10.152 PCIE (0000:00:10.0) NSID 1 from core 0: 13067.89 153.14 9815.98 7862.29 41347.86 00:11:10.152 PCIE (0000:00:11.0) NSID 1 from core 0: 13067.89 153.14 9793.77 7962.16 38639.72 00:11:10.152 PCIE (0000:00:13.0) NSID 1 from core 0: 13067.89 153.14 9769.36 7823.71 36729.25 00:11:10.152 PCIE (0000:00:12.0) NSID 1 from core 0: 13067.89 153.14 9744.56 7802.78 34008.12 00:11:10.152 PCIE (0000:00:12.0) NSID 2 from core 0: 13067.89 153.14 9719.93 7859.51 31260.23 00:11:10.152 PCIE (0000:00:12.0) NSID 3 from core 0: 13067.89 153.14 9695.05 7914.58 28540.01 00:11:10.152 ======================================================== 00:11:10.152 Total : 78407.31 918.84 9756.44 7802.78 41347.86 00:11:10.152 00:11:10.152 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:10.152 ================================================================================= 00:11:10.152 1.00000% : 8162.211us 00:11:10.152 10.00000% : 8519.680us 00:11:10.152 25.00000% : 8877.149us 00:11:10.152 50.00000% : 9353.775us 00:11:10.152 75.00000% : 10009.135us 00:11:10.152 90.00000% : 11021.964us 00:11:10.152 95.00000% : 12153.949us 00:11:10.152 98.00000% : 13643.404us 00:11:10.152 99.00000% : 15371.171us 00:11:10.152 99.50000% : 32410.531us 00:11:10.152 99.90000% : 40989.789us 00:11:10.152 99.99000% : 41466.415us 00:11:10.152 99.99900% : 41466.415us 00:11:10.152 99.99990% : 41466.415us 00:11:10.152 99.99999% : 41466.415us 00:11:10.152 00:11:10.153 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:10.153 ================================================================================= 00:11:10.153 1.00000% : 8221.789us 00:11:10.153 10.00000% : 8579.258us 00:11:10.153 25.00000% : 8877.149us 00:11:10.153 50.00000% : 9294.196us 00:11:10.153 75.00000% : 10068.713us 00:11:10.153 90.00000% : 10962.385us 00:11:10.153 95.00000% : 12213.527us 00:11:10.153 98.00000% : 13107.200us 00:11:10.153 99.00000% : 15192.436us 00:11:10.153 99.50000% : 30384.873us 00:11:10.153 99.90000% : 38130.036us 00:11:10.153 99.99000% : 38606.662us 00:11:10.153 99.99900% : 38844.975us 00:11:10.153 99.99990% : 38844.975us 00:11:10.153 99.99999% : 38844.975us 00:11:10.153 
00:11:10.153 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:10.153 ================================================================================= 00:11:10.153 1.00000% : 8281.367us 00:11:10.153 10.00000% : 8579.258us 00:11:10.153 25.00000% : 8877.149us 00:11:10.153 50.00000% : 9294.196us 00:11:10.153 75.00000% : 10068.713us 00:11:10.153 90.00000% : 10962.385us 00:11:10.153 95.00000% : 12153.949us 00:11:10.153 98.00000% : 13226.356us 00:11:10.153 99.00000% : 15073.280us 00:11:10.153 99.50000% : 28478.371us 00:11:10.153 99.90000% : 36223.535us 00:11:10.153 99.99000% : 36700.160us 00:11:10.153 99.99900% : 36938.473us 00:11:10.153 99.99990% : 36938.473us 00:11:10.153 99.99999% : 36938.473us 00:11:10.153 00:11:10.153 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:10.153 ================================================================================= 00:11:10.153 1.00000% : 8281.367us 00:11:10.153 10.00000% : 8579.258us 00:11:10.153 25.00000% : 8877.149us 00:11:10.153 50.00000% : 9294.196us 00:11:10.153 75.00000% : 10068.713us 00:11:10.153 90.00000% : 10962.385us 00:11:10.153 95.00000% : 12094.371us 00:11:10.153 98.00000% : 13405.091us 00:11:10.153 99.00000% : 14894.545us 00:11:10.153 99.50000% : 25737.775us 00:11:10.153 99.90000% : 33602.095us 00:11:10.153 99.99000% : 34078.720us 00:11:10.153 99.99900% : 34078.720us 00:11:10.153 99.99990% : 34078.720us 00:11:10.153 99.99999% : 34078.720us 00:11:10.153 00:11:10.153 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:10.153 ================================================================================= 00:11:10.153 1.00000% : 8281.367us 00:11:10.153 10.00000% : 8579.258us 00:11:10.153 25.00000% : 8877.149us 00:11:10.153 50.00000% : 9294.196us 00:11:10.153 75.00000% : 10068.713us 00:11:10.153 90.00000% : 10962.385us 00:11:10.153 95.00000% : 12094.371us 00:11:10.153 98.00000% : 13643.404us 00:11:10.153 99.00000% : 14775.389us 00:11:10.153 99.50000% : 22997.178us 00:11:10.153 99.90000% : 30980.655us 00:11:10.153 99.99000% : 31457.280us 00:11:10.153 99.99900% : 31457.280us 00:11:10.153 99.99990% : 31457.280us 00:11:10.153 99.99999% : 31457.280us 00:11:10.153 00:11:10.153 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:10.153 ================================================================================= 00:11:10.153 1.00000% : 8281.367us 00:11:10.153 10.00000% : 8579.258us 00:11:10.153 25.00000% : 8877.149us 00:11:10.153 50.00000% : 9294.196us 00:11:10.153 75.00000% : 10009.135us 00:11:10.153 90.00000% : 10962.385us 00:11:10.153 95.00000% : 12094.371us 00:11:10.153 98.00000% : 13702.982us 00:11:10.153 99.00000% : 15073.280us 00:11:10.153 99.50000% : 20256.582us 00:11:10.153 99.90000% : 28120.902us 00:11:10.153 99.99000% : 28597.527us 00:11:10.153 99.99900% : 28597.527us 00:11:10.153 99.99990% : 28597.527us 00:11:10.153 99.99999% : 28597.527us 00:11:10.153 00:11:10.153 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:10.153 ============================================================================== 00:11:10.153 Range in us Cumulative IO count 00:11:10.153 7804.742 - 7864.320: 0.0076% ( 1) 00:11:10.153 7864.320 - 7923.898: 0.0457% ( 5) 00:11:10.153 7923.898 - 7983.476: 0.1448% ( 13) 00:11:10.153 7983.476 - 8043.055: 0.3049% ( 21) 00:11:10.153 8043.055 - 8102.633: 0.6479% ( 45) 00:11:10.153 8102.633 - 8162.211: 1.1585% ( 67) 00:11:10.153 8162.211 - 8221.789: 1.9131% ( 99) 00:11:10.153 8221.789 - 8281.367: 3.1021% ( 156) 00:11:10.153 
8281.367 - 8340.945: 4.7180% ( 212) 00:11:10.153 8340.945 - 8400.524: 6.5930% ( 246) 00:11:10.153 8400.524 - 8460.102: 8.8110% ( 291) 00:11:10.153 8460.102 - 8519.680: 11.0976% ( 300) 00:11:10.153 8519.680 - 8579.258: 13.6280% ( 332) 00:11:10.153 8579.258 - 8638.836: 16.2881% ( 349) 00:11:10.153 8638.836 - 8698.415: 19.0396% ( 361) 00:11:10.153 8698.415 - 8757.993: 21.9284% ( 379) 00:11:10.153 8757.993 - 8817.571: 24.8247% ( 380) 00:11:10.153 8817.571 - 8877.149: 27.8354% ( 395) 00:11:10.153 8877.149 - 8936.727: 30.7851% ( 387) 00:11:10.153 8936.727 - 8996.305: 33.8110% ( 397) 00:11:10.153 8996.305 - 9055.884: 36.8826% ( 403) 00:11:10.153 9055.884 - 9115.462: 39.9238% ( 399) 00:11:10.153 9115.462 - 9175.040: 42.9345% ( 395) 00:11:10.153 9175.040 - 9234.618: 46.0823% ( 413) 00:11:10.153 9234.618 - 9294.196: 49.0625% ( 391) 00:11:10.153 9294.196 - 9353.775: 52.0960% ( 398) 00:11:10.153 9353.775 - 9413.353: 54.9619% ( 376) 00:11:10.153 9413.353 - 9472.931: 57.6982% ( 359) 00:11:10.153 9472.931 - 9532.509: 60.1982% ( 328) 00:11:10.153 9532.509 - 9592.087: 62.7287% ( 332) 00:11:10.153 9592.087 - 9651.665: 65.0686% ( 307) 00:11:10.153 9651.665 - 9711.244: 67.2256% ( 283) 00:11:10.153 9711.244 - 9770.822: 69.2912% ( 271) 00:11:10.153 9770.822 - 9830.400: 70.9756% ( 221) 00:11:10.153 9830.400 - 9889.978: 72.5229% ( 203) 00:11:10.153 9889.978 - 9949.556: 73.9558% ( 188) 00:11:10.153 9949.556 - 10009.135: 75.2744% ( 173) 00:11:10.153 10009.135 - 10068.713: 76.4405% ( 153) 00:11:10.153 10068.713 - 10128.291: 77.5076% ( 140) 00:11:10.153 10128.291 - 10187.869: 78.5899% ( 142) 00:11:10.153 10187.869 - 10247.447: 79.5655% ( 128) 00:11:10.153 10247.447 - 10307.025: 80.6021% ( 136) 00:11:10.153 10307.025 - 10366.604: 81.6845% ( 142) 00:11:10.153 10366.604 - 10426.182: 82.7210% ( 136) 00:11:10.153 10426.182 - 10485.760: 83.6814% ( 126) 00:11:10.153 10485.760 - 10545.338: 84.6341% ( 125) 00:11:10.153 10545.338 - 10604.916: 85.5640% ( 122) 00:11:10.153 10604.916 - 10664.495: 86.3720% ( 106) 00:11:10.153 10664.495 - 10724.073: 87.1875% ( 107) 00:11:10.153 10724.073 - 10783.651: 87.8887% ( 92) 00:11:10.153 10783.651 - 10843.229: 88.5595% ( 88) 00:11:10.153 10843.229 - 10902.807: 89.1311% ( 75) 00:11:10.153 10902.807 - 10962.385: 89.6799% ( 72) 00:11:10.153 10962.385 - 11021.964: 90.1524% ( 62) 00:11:10.153 11021.964 - 11081.542: 90.5564% ( 53) 00:11:10.153 11081.542 - 11141.120: 90.8994% ( 45) 00:11:10.153 11141.120 - 11200.698: 91.2195% ( 42) 00:11:10.153 11200.698 - 11260.276: 91.5396% ( 42) 00:11:10.153 11260.276 - 11319.855: 91.7378% ( 26) 00:11:10.153 11319.855 - 11379.433: 91.9360% ( 26) 00:11:10.153 11379.433 - 11439.011: 92.1494% ( 28) 00:11:10.153 11439.011 - 11498.589: 92.3552% ( 27) 00:11:10.153 11498.589 - 11558.167: 92.5686% ( 28) 00:11:10.153 11558.167 - 11617.745: 92.8049% ( 31) 00:11:10.153 11617.745 - 11677.324: 93.0564% ( 33) 00:11:10.153 11677.324 - 11736.902: 93.3003% ( 32) 00:11:10.153 11736.902 - 11796.480: 93.5976% ( 39) 00:11:10.153 11796.480 - 11856.058: 93.8491% ( 33) 00:11:10.153 11856.058 - 11915.636: 94.1082% ( 34) 00:11:10.153 11915.636 - 11975.215: 94.3521% ( 32) 00:11:10.153 11975.215 - 12034.793: 94.5884% ( 31) 00:11:10.153 12034.793 - 12094.371: 94.8399% ( 33) 00:11:10.153 12094.371 - 12153.949: 95.0457% ( 27) 00:11:10.153 12153.949 - 12213.527: 95.2210% ( 23) 00:11:10.153 12213.527 - 12273.105: 95.4268% ( 27) 00:11:10.153 12273.105 - 12332.684: 95.6174% ( 25) 00:11:10.153 12332.684 - 12392.262: 95.8155% ( 26) 00:11:10.153 12392.262 - 12451.840: 95.9832% ( 22) 00:11:10.153 
12451.840 - 12511.418: 96.1966% ( 28) 00:11:10.153 12511.418 - 12570.996: 96.3262% ( 17) 00:11:10.153 12570.996 - 12630.575: 96.4787% ( 20) 00:11:10.153 12630.575 - 12690.153: 96.6387% ( 21) 00:11:10.153 12690.153 - 12749.731: 96.7835% ( 19) 00:11:10.153 12749.731 - 12809.309: 96.9131% ( 17) 00:11:10.153 12809.309 - 12868.887: 97.0122% ( 13) 00:11:10.154 12868.887 - 12928.465: 97.1494% ( 18) 00:11:10.154 12928.465 - 12988.044: 97.2713% ( 16) 00:11:10.154 12988.044 - 13047.622: 97.4238% ( 20) 00:11:10.154 13047.622 - 13107.200: 97.5381% ( 15) 00:11:10.154 13107.200 - 13166.778: 97.6220% ( 11) 00:11:10.154 13166.778 - 13226.356: 97.6982% ( 10) 00:11:10.154 13226.356 - 13285.935: 97.7363% ( 5) 00:11:10.154 13285.935 - 13345.513: 97.7744% ( 5) 00:11:10.154 13345.513 - 13405.091: 97.8277% ( 7) 00:11:10.154 13405.091 - 13464.669: 97.8659% ( 5) 00:11:10.154 13464.669 - 13524.247: 97.9192% ( 7) 00:11:10.154 13524.247 - 13583.825: 97.9726% ( 7) 00:11:10.154 13583.825 - 13643.404: 98.0183% ( 6) 00:11:10.154 13643.404 - 13702.982: 98.0488% ( 4) 00:11:10.154 13702.982 - 13762.560: 98.0793% ( 4) 00:11:10.154 13762.560 - 13822.138: 98.1250% ( 6) 00:11:10.154 13822.138 - 13881.716: 98.1707% ( 6) 00:11:10.154 13881.716 - 13941.295: 98.2241% ( 7) 00:11:10.154 13941.295 - 14000.873: 98.2927% ( 9) 00:11:10.154 14000.873 - 14060.451: 98.3460% ( 7) 00:11:10.154 14060.451 - 14120.029: 98.3994% ( 7) 00:11:10.154 14120.029 - 14179.607: 98.4527% ( 7) 00:11:10.154 14179.607 - 14239.185: 98.5061% ( 7) 00:11:10.154 14239.185 - 14298.764: 98.5747% ( 9) 00:11:10.154 14298.764 - 14358.342: 98.6357% ( 8) 00:11:10.154 14358.342 - 14417.920: 98.6814% ( 6) 00:11:10.154 14417.920 - 14477.498: 98.7119% ( 4) 00:11:10.154 14477.498 - 14537.076: 98.7652% ( 7) 00:11:10.154 14537.076 - 14596.655: 98.7881% ( 3) 00:11:10.154 14596.655 - 14656.233: 98.8110% ( 3) 00:11:10.154 14656.233 - 14715.811: 98.8262% ( 2) 00:11:10.154 14715.811 - 14775.389: 98.8415% ( 2) 00:11:10.154 14775.389 - 14834.967: 98.8567% ( 2) 00:11:10.154 14834.967 - 14894.545: 98.8720% ( 2) 00:11:10.154 14894.545 - 14954.124: 98.8948% ( 3) 00:11:10.154 14954.124 - 15013.702: 98.9101% ( 2) 00:11:10.154 15013.702 - 15073.280: 98.9329% ( 3) 00:11:10.154 15073.280 - 15132.858: 98.9482% ( 2) 00:11:10.154 15132.858 - 15192.436: 98.9558% ( 1) 00:11:10.154 15192.436 - 15252.015: 98.9863% ( 4) 00:11:10.154 15252.015 - 15371.171: 99.0244% ( 5) 00:11:10.154 29550.778 - 29669.935: 99.0320% ( 1) 00:11:10.154 29669.935 - 29789.091: 99.0625% ( 4) 00:11:10.154 29789.091 - 29908.247: 99.0777% ( 2) 00:11:10.154 29908.247 - 30027.404: 99.1006% ( 3) 00:11:10.154 30027.404 - 30146.560: 99.1235% ( 3) 00:11:10.154 30146.560 - 30265.716: 99.1463% ( 3) 00:11:10.154 30265.716 - 30384.873: 99.1692% ( 3) 00:11:10.154 30384.873 - 30504.029: 99.1921% ( 3) 00:11:10.154 30504.029 - 30742.342: 99.2226% ( 4) 00:11:10.154 30742.342 - 30980.655: 99.2683% ( 6) 00:11:10.154 30980.655 - 31218.967: 99.3216% ( 7) 00:11:10.154 31218.967 - 31457.280: 99.3674% ( 6) 00:11:10.154 31457.280 - 31695.593: 99.4055% ( 5) 00:11:10.154 31695.593 - 31933.905: 99.4588% ( 7) 00:11:10.154 31933.905 - 32172.218: 99.4970% ( 5) 00:11:10.154 32172.218 - 32410.531: 99.5122% ( 2) 00:11:10.154 38368.349 - 38606.662: 99.5198% ( 1) 00:11:10.154 38606.662 - 38844.975: 99.5655% ( 6) 00:11:10.154 38844.975 - 39083.287: 99.6037% ( 5) 00:11:10.154 39083.287 - 39321.600: 99.6418% ( 5) 00:11:10.154 39321.600 - 39559.913: 99.6799% ( 5) 00:11:10.154 39559.913 - 39798.225: 99.7256% ( 6) 00:11:10.154 39798.225 - 40036.538: 99.7637% ( 5) 
00:11:10.154 40036.538 - 40274.851: 99.8018% ( 5) 00:11:10.154 40274.851 - 40513.164: 99.8476% ( 6) 00:11:10.154 40513.164 - 40751.476: 99.8933% ( 6) 00:11:10.154 40751.476 - 40989.789: 99.9314% ( 5) 00:11:10.154 40989.789 - 41228.102: 99.9771% ( 6) 00:11:10.154 41228.102 - 41466.415: 100.0000% ( 3) 00:11:10.154 00:11:10.154 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:10.154 ============================================================================== 00:11:10.154 Range in us Cumulative IO count 00:11:10.154 7923.898 - 7983.476: 0.0305% ( 4) 00:11:10.154 7983.476 - 8043.055: 0.1296% ( 13) 00:11:10.154 8043.055 - 8102.633: 0.2973% ( 22) 00:11:10.154 8102.633 - 8162.211: 0.5869% ( 38) 00:11:10.154 8162.211 - 8221.789: 1.0747% ( 64) 00:11:10.154 8221.789 - 8281.367: 1.6692% ( 78) 00:11:10.154 8281.367 - 8340.945: 2.5838% ( 120) 00:11:10.154 8340.945 - 8400.524: 4.1616% ( 207) 00:11:10.154 8400.524 - 8460.102: 6.0671% ( 250) 00:11:10.154 8460.102 - 8519.680: 8.3689% ( 302) 00:11:10.154 8519.680 - 8579.258: 10.9680% ( 341) 00:11:10.154 8579.258 - 8638.836: 13.8415% ( 377) 00:11:10.154 8638.836 - 8698.415: 17.0655% ( 423) 00:11:10.154 8698.415 - 8757.993: 20.4345% ( 442) 00:11:10.154 8757.993 - 8817.571: 23.7652% ( 437) 00:11:10.154 8817.571 - 8877.149: 27.1875% ( 449) 00:11:10.154 8877.149 - 8936.727: 30.5945% ( 447) 00:11:10.154 8936.727 - 8996.305: 34.0396% ( 452) 00:11:10.154 8996.305 - 9055.884: 37.3628% ( 436) 00:11:10.154 9055.884 - 9115.462: 40.7165% ( 440) 00:11:10.154 9115.462 - 9175.040: 43.9710% ( 427) 00:11:10.154 9175.040 - 9234.618: 47.3323% ( 441) 00:11:10.154 9234.618 - 9294.196: 50.5259% ( 419) 00:11:10.154 9294.196 - 9353.775: 53.4832% ( 388) 00:11:10.154 9353.775 - 9413.353: 56.1204% ( 346) 00:11:10.154 9413.353 - 9472.931: 58.7500% ( 345) 00:11:10.154 9472.931 - 9532.509: 61.3720% ( 344) 00:11:10.154 9532.509 - 9592.087: 63.7424% ( 311) 00:11:10.154 9592.087 - 9651.665: 65.8613% ( 278) 00:11:10.154 9651.665 - 9711.244: 67.7287% ( 245) 00:11:10.154 9711.244 - 9770.822: 69.3521% ( 213) 00:11:10.154 9770.822 - 9830.400: 70.8537% ( 197) 00:11:10.154 9830.400 - 9889.978: 72.2485% ( 183) 00:11:10.154 9889.978 - 9949.556: 73.6052% ( 178) 00:11:10.154 9949.556 - 10009.135: 74.9314% ( 174) 00:11:10.154 10009.135 - 10068.713: 76.2271% ( 170) 00:11:10.154 10068.713 - 10128.291: 77.4924% ( 166) 00:11:10.154 10128.291 - 10187.869: 78.7271% ( 162) 00:11:10.154 10187.869 - 10247.447: 79.8857% ( 152) 00:11:10.154 10247.447 - 10307.025: 81.0976% ( 159) 00:11:10.154 10307.025 - 10366.604: 82.3018% ( 158) 00:11:10.154 10366.604 - 10426.182: 83.4451% ( 150) 00:11:10.154 10426.182 - 10485.760: 84.4741% ( 135) 00:11:10.154 10485.760 - 10545.338: 85.4954% ( 134) 00:11:10.154 10545.338 - 10604.916: 86.3872% ( 117) 00:11:10.154 10604.916 - 10664.495: 87.2027% ( 107) 00:11:10.154 10664.495 - 10724.073: 87.9268% ( 95) 00:11:10.154 10724.073 - 10783.651: 88.5518% ( 82) 00:11:10.154 10783.651 - 10843.229: 89.1082% ( 73) 00:11:10.154 10843.229 - 10902.807: 89.6113% ( 66) 00:11:10.154 10902.807 - 10962.385: 90.0381% ( 56) 00:11:10.154 10962.385 - 11021.964: 90.3506% ( 41) 00:11:10.154 11021.964 - 11081.542: 90.6174% ( 35) 00:11:10.154 11081.542 - 11141.120: 90.8613% ( 32) 00:11:10.154 11141.120 - 11200.698: 91.0747% ( 28) 00:11:10.154 11200.698 - 11260.276: 91.2424% ( 22) 00:11:10.154 11260.276 - 11319.855: 91.4253% ( 24) 00:11:10.154 11319.855 - 11379.433: 91.6159% ( 25) 00:11:10.154 11379.433 - 11439.011: 91.8140% ( 26) 00:11:10.154 11439.011 - 11498.589: 92.0351% ( 29) 
00:11:10.154 11498.589 - 11558.167: 92.2561% ( 29) 00:11:10.154 11558.167 - 11617.745: 92.4848% ( 30) 00:11:10.154 11617.745 - 11677.324: 92.6905% ( 27) 00:11:10.154 11677.324 - 11736.902: 92.8735% ( 24) 00:11:10.154 11736.902 - 11796.480: 93.1402% ( 35) 00:11:10.154 11796.480 - 11856.058: 93.3841% ( 32) 00:11:10.154 11856.058 - 11915.636: 93.6662% ( 37) 00:11:10.154 11915.636 - 11975.215: 93.9710% ( 40) 00:11:10.154 11975.215 - 12034.793: 94.2378% ( 35) 00:11:10.154 12034.793 - 12094.371: 94.5046% ( 35) 00:11:10.154 12094.371 - 12153.949: 94.7561% ( 33) 00:11:10.154 12153.949 - 12213.527: 95.0534% ( 39) 00:11:10.154 12213.527 - 12273.105: 95.2820% ( 30) 00:11:10.154 12273.105 - 12332.684: 95.5259% ( 32) 00:11:10.154 12332.684 - 12392.262: 95.7546% ( 30) 00:11:10.154 12392.262 - 12451.840: 95.9985% ( 32) 00:11:10.154 12451.840 - 12511.418: 96.2119% ( 28) 00:11:10.154 12511.418 - 12570.996: 96.4634% ( 33) 00:11:10.154 12570.996 - 12630.575: 96.6921% ( 30) 00:11:10.154 12630.575 - 12690.153: 96.8979% ( 27) 00:11:10.154 12690.153 - 12749.731: 97.0960% ( 26) 00:11:10.154 12749.731 - 12809.309: 97.2866% ( 25) 00:11:10.154 12809.309 - 12868.887: 97.4848% ( 26) 00:11:10.154 12868.887 - 12928.465: 97.6677% ( 24) 00:11:10.154 12928.465 - 12988.044: 97.7896% ( 16) 00:11:10.154 12988.044 - 13047.622: 97.9192% ( 17) 00:11:10.154 13047.622 - 13107.200: 98.0183% ( 13) 00:11:10.154 13107.200 - 13166.778: 98.0869% ( 9) 00:11:10.155 13166.778 - 13226.356: 98.1860% ( 13) 00:11:10.155 13226.356 - 13285.935: 98.2622% ( 10) 00:11:10.155 13285.935 - 13345.513: 98.3232% ( 8) 00:11:10.155 13345.513 - 13405.091: 98.3689% ( 6) 00:11:10.155 13405.091 - 13464.669: 98.4223% ( 7) 00:11:10.155 13464.669 - 13524.247: 98.4604% ( 5) 00:11:10.155 13524.247 - 13583.825: 98.4985% ( 5) 00:11:10.155 13583.825 - 13643.404: 98.5290% ( 4) 00:11:10.155 13643.404 - 13702.982: 98.5366% ( 1) 00:11:10.155 14000.873 - 14060.451: 98.5442% ( 1) 00:11:10.155 14060.451 - 14120.029: 98.5747% ( 4) 00:11:10.155 14120.029 - 14179.607: 98.5976% ( 3) 00:11:10.155 14179.607 - 14239.185: 98.6204% ( 3) 00:11:10.155 14239.185 - 14298.764: 98.6509% ( 4) 00:11:10.155 14298.764 - 14358.342: 98.6662% ( 2) 00:11:10.155 14358.342 - 14417.920: 98.6966% ( 4) 00:11:10.155 14417.920 - 14477.498: 98.7195% ( 3) 00:11:10.155 14477.498 - 14537.076: 98.7424% ( 3) 00:11:10.155 14537.076 - 14596.655: 98.7729% ( 4) 00:11:10.155 14596.655 - 14656.233: 98.7957% ( 3) 00:11:10.155 14656.233 - 14715.811: 98.8186% ( 3) 00:11:10.155 14715.811 - 14775.389: 98.8415% ( 3) 00:11:10.155 14775.389 - 14834.967: 98.8720% ( 4) 00:11:10.155 14834.967 - 14894.545: 98.8872% ( 2) 00:11:10.155 14894.545 - 14954.124: 98.9101% ( 3) 00:11:10.155 14954.124 - 15013.702: 98.9329% ( 3) 00:11:10.155 15013.702 - 15073.280: 98.9558% ( 3) 00:11:10.155 15073.280 - 15132.858: 98.9787% ( 3) 00:11:10.155 15132.858 - 15192.436: 99.0015% ( 3) 00:11:10.155 15192.436 - 15252.015: 99.0168% ( 2) 00:11:10.155 15252.015 - 15371.171: 99.0244% ( 1) 00:11:10.155 27763.433 - 27882.589: 99.0320% ( 1) 00:11:10.155 27882.589 - 28001.745: 99.0549% ( 3) 00:11:10.155 28001.745 - 28120.902: 99.0777% ( 3) 00:11:10.155 28120.902 - 28240.058: 99.1006% ( 3) 00:11:10.155 28240.058 - 28359.215: 99.1235% ( 3) 00:11:10.155 28359.215 - 28478.371: 99.1387% ( 2) 00:11:10.155 28478.371 - 28597.527: 99.1616% ( 3) 00:11:10.155 28597.527 - 28716.684: 99.1845% ( 3) 00:11:10.155 28716.684 - 28835.840: 99.2073% ( 3) 00:11:10.155 28835.840 - 28954.996: 99.2302% ( 3) 00:11:10.155 28954.996 - 29074.153: 99.2530% ( 3) 00:11:10.155 29074.153 - 
29193.309: 99.2759% ( 3) 00:11:10.155 29193.309 - 29312.465: 99.2988% ( 3) 00:11:10.155 29312.465 - 29431.622: 99.3216% ( 3) 00:11:10.155 29431.622 - 29550.778: 99.3445% ( 3) 00:11:10.155 29550.778 - 29669.935: 99.3674% ( 3) 00:11:10.155 29669.935 - 29789.091: 99.3826% ( 2) 00:11:10.155 29789.091 - 29908.247: 99.4131% ( 4) 00:11:10.155 29908.247 - 30027.404: 99.4360% ( 3) 00:11:10.155 30027.404 - 30146.560: 99.4588% ( 3) 00:11:10.155 30146.560 - 30265.716: 99.4817% ( 3) 00:11:10.155 30265.716 - 30384.873: 99.5046% ( 3) 00:11:10.155 30384.873 - 30504.029: 99.5122% ( 1) 00:11:10.155 35985.222 - 36223.535: 99.5427% ( 4) 00:11:10.155 36223.535 - 36461.847: 99.5884% ( 6) 00:11:10.155 36461.847 - 36700.160: 99.6341% ( 6) 00:11:10.155 36700.160 - 36938.473: 99.6723% ( 5) 00:11:10.155 36938.473 - 37176.785: 99.7180% ( 6) 00:11:10.155 37176.785 - 37415.098: 99.7637% ( 6) 00:11:10.155 37415.098 - 37653.411: 99.8095% ( 6) 00:11:10.155 37653.411 - 37891.724: 99.8552% ( 6) 00:11:10.155 37891.724 - 38130.036: 99.9009% ( 6) 00:11:10.155 38130.036 - 38368.349: 99.9466% ( 6) 00:11:10.155 38368.349 - 38606.662: 99.9924% ( 6) 00:11:10.155 38606.662 - 38844.975: 100.0000% ( 1) 00:11:10.155 00:11:10.155 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:10.155 ============================================================================== 00:11:10.155 Range in us Cumulative IO count 00:11:10.155 7804.742 - 7864.320: 0.0305% ( 4) 00:11:10.155 7864.320 - 7923.898: 0.0534% ( 3) 00:11:10.155 7923.898 - 7983.476: 0.0915% ( 5) 00:11:10.155 7983.476 - 8043.055: 0.1448% ( 7) 00:11:10.155 8043.055 - 8102.633: 0.2668% ( 16) 00:11:10.155 8102.633 - 8162.211: 0.5030% ( 31) 00:11:10.155 8162.211 - 8221.789: 0.8765% ( 49) 00:11:10.155 8221.789 - 8281.367: 1.5091% ( 83) 00:11:10.155 8281.367 - 8340.945: 2.4466% ( 123) 00:11:10.155 8340.945 - 8400.524: 3.8338% ( 182) 00:11:10.155 8400.524 - 8460.102: 5.6250% ( 235) 00:11:10.155 8460.102 - 8519.680: 8.0107% ( 313) 00:11:10.155 8519.680 - 8579.258: 10.7698% ( 362) 00:11:10.155 8579.258 - 8638.836: 13.7805% ( 395) 00:11:10.155 8638.836 - 8698.415: 16.9741% ( 419) 00:11:10.155 8698.415 - 8757.993: 20.4116% ( 451) 00:11:10.155 8757.993 - 8817.571: 23.7805% ( 442) 00:11:10.155 8817.571 - 8877.149: 27.1341% ( 440) 00:11:10.155 8877.149 - 8936.727: 30.5183% ( 444) 00:11:10.155 8936.727 - 8996.305: 33.8262% ( 434) 00:11:10.155 8996.305 - 9055.884: 37.2637% ( 451) 00:11:10.155 9055.884 - 9115.462: 40.6784% ( 448) 00:11:10.155 9115.462 - 9175.040: 43.9787% ( 433) 00:11:10.155 9175.040 - 9234.618: 47.2713% ( 432) 00:11:10.155 9234.618 - 9294.196: 50.4802% ( 421) 00:11:10.155 9294.196 - 9353.775: 53.6204% ( 412) 00:11:10.155 9353.775 - 9413.353: 56.4710% ( 374) 00:11:10.155 9413.353 - 9472.931: 59.1159% ( 347) 00:11:10.155 9472.931 - 9532.509: 61.6159% ( 328) 00:11:10.155 9532.509 - 9592.087: 63.8415% ( 292) 00:11:10.155 9592.087 - 9651.665: 65.8308% ( 261) 00:11:10.155 9651.665 - 9711.244: 67.6753% ( 242) 00:11:10.155 9711.244 - 9770.822: 69.3216% ( 216) 00:11:10.155 9770.822 - 9830.400: 70.7927% ( 193) 00:11:10.155 9830.400 - 9889.978: 72.1951% ( 184) 00:11:10.155 9889.978 - 9949.556: 73.4756% ( 168) 00:11:10.155 9949.556 - 10009.135: 74.8552% ( 181) 00:11:10.155 10009.135 - 10068.713: 76.1280% ( 167) 00:11:10.155 10068.713 - 10128.291: 77.3095% ( 155) 00:11:10.155 10128.291 - 10187.869: 78.5137% ( 158) 00:11:10.155 10187.869 - 10247.447: 79.7409% ( 161) 00:11:10.155 10247.447 - 10307.025: 80.9680% ( 161) 00:11:10.155 10307.025 - 10366.604: 82.1646% ( 157) 00:11:10.155 
10366.604 - 10426.182: 83.2698% ( 145) 00:11:10.155 10426.182 - 10485.760: 84.4055% ( 149) 00:11:10.155 10485.760 - 10545.338: 85.4421% ( 136) 00:11:10.155 10545.338 - 10604.916: 86.4329% ( 130) 00:11:10.155 10604.916 - 10664.495: 87.3552% ( 121) 00:11:10.155 10664.495 - 10724.073: 88.0869% ( 96) 00:11:10.155 10724.073 - 10783.651: 88.7576% ( 88) 00:11:10.155 10783.651 - 10843.229: 89.3674% ( 80) 00:11:10.155 10843.229 - 10902.807: 89.8780% ( 67) 00:11:10.155 10902.807 - 10962.385: 90.3430% ( 61) 00:11:10.155 10962.385 - 11021.964: 90.7622% ( 55) 00:11:10.155 11021.964 - 11081.542: 91.0899% ( 43) 00:11:10.155 11081.542 - 11141.120: 91.3643% ( 36) 00:11:10.155 11141.120 - 11200.698: 91.5701% ( 27) 00:11:10.155 11200.698 - 11260.276: 91.6997% ( 17) 00:11:10.155 11260.276 - 11319.855: 91.8445% ( 19) 00:11:10.155 11319.855 - 11379.433: 92.0046% ( 21) 00:11:10.155 11379.433 - 11439.011: 92.1951% ( 25) 00:11:10.155 11439.011 - 11498.589: 92.3933% ( 26) 00:11:10.155 11498.589 - 11558.167: 92.6143% ( 29) 00:11:10.155 11558.167 - 11617.745: 92.8506% ( 31) 00:11:10.155 11617.745 - 11677.324: 93.1021% ( 33) 00:11:10.155 11677.324 - 11736.902: 93.3079% ( 27) 00:11:10.155 11736.902 - 11796.480: 93.5442% ( 31) 00:11:10.155 11796.480 - 11856.058: 93.7805% ( 31) 00:11:10.155 11856.058 - 11915.636: 94.0625% ( 37) 00:11:10.155 11915.636 - 11975.215: 94.3064% ( 32) 00:11:10.155 11975.215 - 12034.793: 94.6113% ( 40) 00:11:10.155 12034.793 - 12094.371: 94.8933% ( 37) 00:11:10.155 12094.371 - 12153.949: 95.1372% ( 32) 00:11:10.155 12153.949 - 12213.527: 95.3659% ( 30) 00:11:10.155 12213.527 - 12273.105: 95.6021% ( 31) 00:11:10.155 12273.105 - 12332.684: 95.8384% ( 31) 00:11:10.155 12332.684 - 12392.262: 96.0442% ( 27) 00:11:10.155 12392.262 - 12451.840: 96.2729% ( 30) 00:11:10.155 12451.840 - 12511.418: 96.4787% ( 27) 00:11:10.155 12511.418 - 12570.996: 96.6845% ( 27) 00:11:10.155 12570.996 - 12630.575: 96.8826% ( 26) 00:11:10.155 12630.575 - 12690.153: 97.0732% ( 25) 00:11:10.155 12690.153 - 12749.731: 97.2409% ( 22) 00:11:10.155 12749.731 - 12809.309: 97.3628% ( 16) 00:11:10.155 12809.309 - 12868.887: 97.4771% ( 15) 00:11:10.155 12868.887 - 12928.465: 97.5610% ( 11) 00:11:10.155 12928.465 - 12988.044: 97.6829% ( 16) 00:11:10.155 12988.044 - 13047.622: 97.7896% ( 14) 00:11:10.155 13047.622 - 13107.200: 97.8963% ( 14) 00:11:10.155 13107.200 - 13166.778: 97.9954% ( 13) 00:11:10.155 13166.778 - 13226.356: 98.0564% ( 8) 00:11:10.155 13226.356 - 13285.935: 98.1098% ( 7) 00:11:10.156 13285.935 - 13345.513: 98.1479% ( 5) 00:11:10.156 13345.513 - 13405.091: 98.2012% ( 7) 00:11:10.156 13405.091 - 13464.669: 98.2393% ( 5) 00:11:10.156 13464.669 - 13524.247: 98.2774% ( 5) 00:11:10.156 13524.247 - 13583.825: 98.3079% ( 4) 00:11:10.156 13583.825 - 13643.404: 98.3308% ( 3) 00:11:10.156 13643.404 - 13702.982: 98.3537% ( 3) 00:11:10.156 13702.982 - 13762.560: 98.3918% ( 5) 00:11:10.156 13762.560 - 13822.138: 98.4451% ( 7) 00:11:10.156 13822.138 - 13881.716: 98.4909% ( 6) 00:11:10.156 13881.716 - 13941.295: 98.5366% ( 6) 00:11:10.156 13941.295 - 14000.873: 98.5823% ( 6) 00:11:10.156 14000.873 - 14060.451: 98.6280% ( 6) 00:11:10.156 14060.451 - 14120.029: 98.6814% ( 7) 00:11:10.156 14120.029 - 14179.607: 98.7119% ( 4) 00:11:10.156 14179.607 - 14239.185: 98.7348% ( 3) 00:11:10.156 14239.185 - 14298.764: 98.7576% ( 3) 00:11:10.156 14298.764 - 14358.342: 98.7805% ( 3) 00:11:10.156 14358.342 - 14417.920: 98.8034% ( 3) 00:11:10.156 14417.920 - 14477.498: 98.8186% ( 2) 00:11:10.156 14477.498 - 14537.076: 98.8415% ( 3) 00:11:10.156 
14537.076 - 14596.655: 98.8567% ( 2) 00:11:10.156 14596.655 - 14656.233: 98.8720% ( 2) 00:11:10.156 14656.233 - 14715.811: 98.8948% ( 3) 00:11:10.156 14715.811 - 14775.389: 98.9101% ( 2) 00:11:10.156 14775.389 - 14834.967: 98.9329% ( 3) 00:11:10.156 14834.967 - 14894.545: 98.9558% ( 3) 00:11:10.156 14894.545 - 14954.124: 98.9787% ( 3) 00:11:10.156 14954.124 - 15013.702: 98.9939% ( 2) 00:11:10.156 15013.702 - 15073.280: 99.0168% ( 3) 00:11:10.156 15073.280 - 15132.858: 99.0244% ( 1) 00:11:10.156 25856.931 - 25976.087: 99.0396% ( 2) 00:11:10.156 25976.087 - 26095.244: 99.0625% ( 3) 00:11:10.156 26095.244 - 26214.400: 99.0854% ( 3) 00:11:10.156 26214.400 - 26333.556: 99.1159% ( 4) 00:11:10.156 26333.556 - 26452.713: 99.1387% ( 3) 00:11:10.156 26452.713 - 26571.869: 99.1540% ( 2) 00:11:10.156 26571.869 - 26691.025: 99.1768% ( 3) 00:11:10.156 26691.025 - 26810.182: 99.1997% ( 3) 00:11:10.156 26810.182 - 26929.338: 99.2226% ( 3) 00:11:10.156 26929.338 - 27048.495: 99.2454% ( 3) 00:11:10.156 27048.495 - 27167.651: 99.2683% ( 3) 00:11:10.156 27167.651 - 27286.807: 99.2912% ( 3) 00:11:10.156 27286.807 - 27405.964: 99.3064% ( 2) 00:11:10.156 27405.964 - 27525.120: 99.3293% ( 3) 00:11:10.156 27525.120 - 27644.276: 99.3521% ( 3) 00:11:10.156 27644.276 - 27763.433: 99.3750% ( 3) 00:11:10.156 27763.433 - 27882.589: 99.3979% ( 3) 00:11:10.156 27882.589 - 28001.745: 99.4207% ( 3) 00:11:10.156 28001.745 - 28120.902: 99.4436% ( 3) 00:11:10.156 28120.902 - 28240.058: 99.4665% ( 3) 00:11:10.156 28240.058 - 28359.215: 99.4893% ( 3) 00:11:10.156 28359.215 - 28478.371: 99.5122% ( 3) 00:11:10.156 33840.407 - 34078.720: 99.5198% ( 1) 00:11:10.156 34078.720 - 34317.033: 99.5579% ( 5) 00:11:10.156 34317.033 - 34555.345: 99.5960% ( 5) 00:11:10.156 34555.345 - 34793.658: 99.6418% ( 6) 00:11:10.156 34793.658 - 35031.971: 99.6875% ( 6) 00:11:10.156 35031.971 - 35270.284: 99.7332% ( 6) 00:11:10.156 35270.284 - 35508.596: 99.7713% ( 5) 00:11:10.156 35508.596 - 35746.909: 99.8171% ( 6) 00:11:10.156 35746.909 - 35985.222: 99.8628% ( 6) 00:11:10.156 35985.222 - 36223.535: 99.9085% ( 6) 00:11:10.156 36223.535 - 36461.847: 99.9543% ( 6) 00:11:10.156 36461.847 - 36700.160: 99.9924% ( 5) 00:11:10.156 36700.160 - 36938.473: 100.0000% ( 1) 00:11:10.156 00:11:10.156 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:10.156 ============================================================================== 00:11:10.156 Range in us Cumulative IO count 00:11:10.156 7745.164 - 7804.742: 0.0076% ( 1) 00:11:10.156 7804.742 - 7864.320: 0.0305% ( 3) 00:11:10.156 7864.320 - 7923.898: 0.0610% ( 4) 00:11:10.156 7923.898 - 7983.476: 0.0991% ( 5) 00:11:10.156 7983.476 - 8043.055: 0.1524% ( 7) 00:11:10.156 8043.055 - 8102.633: 0.3049% ( 20) 00:11:10.156 8102.633 - 8162.211: 0.4649% ( 21) 00:11:10.156 8162.211 - 8221.789: 0.8308% ( 48) 00:11:10.156 8221.789 - 8281.367: 1.4558% ( 82) 00:11:10.156 8281.367 - 8340.945: 2.4314% ( 128) 00:11:10.156 8340.945 - 8400.524: 3.7957% ( 179) 00:11:10.156 8400.524 - 8460.102: 5.5945% ( 236) 00:11:10.156 8460.102 - 8519.680: 7.7439% ( 282) 00:11:10.156 8519.680 - 8579.258: 10.3735% ( 345) 00:11:10.156 8579.258 - 8638.836: 13.3765% ( 394) 00:11:10.156 8638.836 - 8698.415: 16.6082% ( 424) 00:11:10.156 8698.415 - 8757.993: 19.9619% ( 440) 00:11:10.156 8757.993 - 8817.571: 23.3155% ( 440) 00:11:10.156 8817.571 - 8877.149: 26.7378% ( 449) 00:11:10.156 8877.149 - 8936.727: 30.1677% ( 450) 00:11:10.156 8936.727 - 8996.305: 33.5595% ( 445) 00:11:10.156 8996.305 - 9055.884: 37.0122% ( 453) 00:11:10.156 
9055.884 - 9115.462: 40.5183% ( 460) 00:11:10.156 9115.462 - 9175.040: 43.9482% ( 450) 00:11:10.156 9175.040 - 9234.618: 47.2409% ( 432) 00:11:10.156 9234.618 - 9294.196: 50.3887% ( 413) 00:11:10.156 9294.196 - 9353.775: 53.3994% ( 395) 00:11:10.156 9353.775 - 9413.353: 56.3796% ( 391) 00:11:10.156 9413.353 - 9472.931: 59.1311% ( 361) 00:11:10.156 9472.931 - 9532.509: 61.5549% ( 318) 00:11:10.156 9532.509 - 9592.087: 63.6966% ( 281) 00:11:10.156 9592.087 - 9651.665: 65.7393% ( 268) 00:11:10.156 9651.665 - 9711.244: 67.6372% ( 249) 00:11:10.156 9711.244 - 9770.822: 69.2607% ( 213) 00:11:10.156 9770.822 - 9830.400: 70.7317% ( 193) 00:11:10.156 9830.400 - 9889.978: 72.1570% ( 187) 00:11:10.156 9889.978 - 9949.556: 73.4527% ( 170) 00:11:10.156 9949.556 - 10009.135: 74.8018% ( 177) 00:11:10.156 10009.135 - 10068.713: 76.0595% ( 165) 00:11:10.156 10068.713 - 10128.291: 77.2561% ( 157) 00:11:10.156 10128.291 - 10187.869: 78.5213% ( 166) 00:11:10.156 10187.869 - 10247.447: 79.8018% ( 168) 00:11:10.156 10247.447 - 10307.025: 81.0747% ( 167) 00:11:10.156 10307.025 - 10366.604: 82.2561% ( 155) 00:11:10.156 10366.604 - 10426.182: 83.4527% ( 157) 00:11:10.156 10426.182 - 10485.760: 84.5351% ( 142) 00:11:10.156 10485.760 - 10545.338: 85.5335% ( 131) 00:11:10.156 10545.338 - 10604.916: 86.5244% ( 130) 00:11:10.156 10604.916 - 10664.495: 87.3628% ( 110) 00:11:10.156 10664.495 - 10724.073: 88.1326% ( 101) 00:11:10.156 10724.073 - 10783.651: 88.7881% ( 86) 00:11:10.156 10783.651 - 10843.229: 89.3902% ( 79) 00:11:10.156 10843.229 - 10902.807: 89.8933% ( 66) 00:11:10.156 10902.807 - 10962.385: 90.3887% ( 65) 00:11:10.156 10962.385 - 11021.964: 90.8155% ( 56) 00:11:10.156 11021.964 - 11081.542: 91.1128% ( 39) 00:11:10.156 11081.542 - 11141.120: 91.3796% ( 35) 00:11:10.156 11141.120 - 11200.698: 91.6082% ( 30) 00:11:10.156 11200.698 - 11260.276: 91.8064% ( 26) 00:11:10.156 11260.276 - 11319.855: 91.9970% ( 25) 00:11:10.156 11319.855 - 11379.433: 92.2409% ( 32) 00:11:10.156 11379.433 - 11439.011: 92.4390% ( 26) 00:11:10.156 11439.011 - 11498.589: 92.6524% ( 28) 00:11:10.156 11498.589 - 11558.167: 92.8659% ( 28) 00:11:10.156 11558.167 - 11617.745: 93.0869% ( 29) 00:11:10.156 11617.745 - 11677.324: 93.3232% ( 31) 00:11:10.156 11677.324 - 11736.902: 93.5290% ( 27) 00:11:10.156 11736.902 - 11796.480: 93.7729% ( 32) 00:11:10.156 11796.480 - 11856.058: 94.0015% ( 30) 00:11:10.156 11856.058 - 11915.636: 94.2302% ( 30) 00:11:10.156 11915.636 - 11975.215: 94.4970% ( 35) 00:11:10.156 11975.215 - 12034.793: 94.7637% ( 35) 00:11:10.156 12034.793 - 12094.371: 95.0457% ( 37) 00:11:10.156 12094.371 - 12153.949: 95.2896% ( 32) 00:11:10.156 12153.949 - 12213.527: 95.5564% ( 35) 00:11:10.156 12213.527 - 12273.105: 95.7470% ( 25) 00:11:10.156 12273.105 - 12332.684: 95.9756% ( 30) 00:11:10.156 12332.684 - 12392.262: 96.1662% ( 25) 00:11:10.156 12392.262 - 12451.840: 96.3872% ( 29) 00:11:10.156 12451.840 - 12511.418: 96.5549% ( 22) 00:11:10.156 12511.418 - 12570.996: 96.7378% ( 24) 00:11:10.156 12570.996 - 12630.575: 96.8979% ( 21) 00:11:10.156 12630.575 - 12690.153: 97.0274% ( 17) 00:11:10.156 12690.153 - 12749.731: 97.1646% ( 18) 00:11:10.156 12749.731 - 12809.309: 97.2790% ( 15) 00:11:10.156 12809.309 - 12868.887: 97.4085% ( 17) 00:11:10.156 12868.887 - 12928.465: 97.4848% ( 10) 00:11:10.156 12928.465 - 12988.044: 97.5762% ( 12) 00:11:10.156 12988.044 - 13047.622: 97.6829% ( 14) 00:11:10.156 13047.622 - 13107.200: 97.7668% ( 11) 00:11:10.156 13107.200 - 13166.778: 97.8277% ( 8) 00:11:10.156 13166.778 - 13226.356: 97.8659% ( 5) 
00:11:10.157 13226.356 - 13285.935: 97.9116% ( 6) 00:11:10.157 13285.935 - 13345.513: 97.9802% ( 9) 00:11:10.157 13345.513 - 13405.091: 98.0335% ( 7) 00:11:10.157 13405.091 - 13464.669: 98.1021% ( 9) 00:11:10.157 13464.669 - 13524.247: 98.1555% ( 7) 00:11:10.157 13524.247 - 13583.825: 98.2012% ( 6) 00:11:10.157 13583.825 - 13643.404: 98.2241% ( 3) 00:11:10.157 13643.404 - 13702.982: 98.2470% ( 3) 00:11:10.157 13702.982 - 13762.560: 98.2774% ( 4) 00:11:10.157 13762.560 - 13822.138: 98.3232% ( 6) 00:11:10.157 13822.138 - 13881.716: 98.3841% ( 8) 00:11:10.157 13881.716 - 13941.295: 98.4299% ( 6) 00:11:10.157 13941.295 - 14000.873: 98.4756% ( 6) 00:11:10.157 14000.873 - 14060.451: 98.5290% ( 7) 00:11:10.157 14060.451 - 14120.029: 98.5747% ( 6) 00:11:10.157 14120.029 - 14179.607: 98.6280% ( 7) 00:11:10.157 14179.607 - 14239.185: 98.6738% ( 6) 00:11:10.157 14239.185 - 14298.764: 98.7271% ( 7) 00:11:10.157 14298.764 - 14358.342: 98.7729% ( 6) 00:11:10.157 14358.342 - 14417.920: 98.8034% ( 4) 00:11:10.157 14417.920 - 14477.498: 98.8338% ( 4) 00:11:10.157 14477.498 - 14537.076: 98.8567% ( 3) 00:11:10.157 14537.076 - 14596.655: 98.8796% ( 3) 00:11:10.157 14596.655 - 14656.233: 98.9024% ( 3) 00:11:10.157 14656.233 - 14715.811: 98.9253% ( 3) 00:11:10.157 14715.811 - 14775.389: 98.9482% ( 3) 00:11:10.157 14775.389 - 14834.967: 98.9787% ( 4) 00:11:10.157 14834.967 - 14894.545: 99.0015% ( 3) 00:11:10.157 14894.545 - 14954.124: 99.0244% ( 3) 00:11:10.157 23116.335 - 23235.491: 99.0473% ( 3) 00:11:10.157 23235.491 - 23354.647: 99.0701% ( 3) 00:11:10.157 23354.647 - 23473.804: 99.0930% ( 3) 00:11:10.157 23473.804 - 23592.960: 99.1159% ( 3) 00:11:10.157 23592.960 - 23712.116: 99.1387% ( 3) 00:11:10.157 23712.116 - 23831.273: 99.1540% ( 2) 00:11:10.157 23831.273 - 23950.429: 99.1768% ( 3) 00:11:10.157 23950.429 - 24069.585: 99.1997% ( 3) 00:11:10.157 24069.585 - 24188.742: 99.2226% ( 3) 00:11:10.157 24188.742 - 24307.898: 99.2454% ( 3) 00:11:10.157 24307.898 - 24427.055: 99.2683% ( 3) 00:11:10.157 24427.055 - 24546.211: 99.2912% ( 3) 00:11:10.157 24546.211 - 24665.367: 99.3064% ( 2) 00:11:10.157 24665.367 - 24784.524: 99.3293% ( 3) 00:11:10.157 24784.524 - 24903.680: 99.3521% ( 3) 00:11:10.157 24903.680 - 25022.836: 99.3750% ( 3) 00:11:10.157 25022.836 - 25141.993: 99.3979% ( 3) 00:11:10.157 25141.993 - 25261.149: 99.4207% ( 3) 00:11:10.157 25261.149 - 25380.305: 99.4436% ( 3) 00:11:10.157 25380.305 - 25499.462: 99.4665% ( 3) 00:11:10.157 25499.462 - 25618.618: 99.4893% ( 3) 00:11:10.157 25618.618 - 25737.775: 99.5046% ( 2) 00:11:10.157 25737.775 - 25856.931: 99.5122% ( 1) 00:11:10.157 31218.967 - 31457.280: 99.5274% ( 2) 00:11:10.157 31457.280 - 31695.593: 99.5732% ( 6) 00:11:10.157 31695.593 - 31933.905: 99.6189% ( 6) 00:11:10.157 31933.905 - 32172.218: 99.6570% ( 5) 00:11:10.157 32172.218 - 32410.531: 99.7027% ( 6) 00:11:10.157 32410.531 - 32648.844: 99.7409% ( 5) 00:11:10.157 32648.844 - 32887.156: 99.7866% ( 6) 00:11:10.157 32887.156 - 33125.469: 99.8323% ( 6) 00:11:10.157 33125.469 - 33363.782: 99.8780% ( 6) 00:11:10.157 33363.782 - 33602.095: 99.9238% ( 6) 00:11:10.157 33602.095 - 33840.407: 99.9695% ( 6) 00:11:10.157 33840.407 - 34078.720: 100.0000% ( 4) 00:11:10.157 00:11:10.157 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:10.157 ============================================================================== 00:11:10.157 Range in us Cumulative IO count 00:11:10.157 7804.742 - 7864.320: 0.0076% ( 1) 00:11:10.157 7864.320 - 7923.898: 0.0381% ( 4) 00:11:10.157 7923.898 - 7983.476: 
0.0610% ( 3) 00:11:10.157 7983.476 - 8043.055: 0.1372% ( 10) 00:11:10.157 8043.055 - 8102.633: 0.2515% ( 15) 00:11:10.157 8102.633 - 8162.211: 0.4421% ( 25) 00:11:10.157 8162.211 - 8221.789: 0.7546% ( 41) 00:11:10.157 8221.789 - 8281.367: 1.4024% ( 85) 00:11:10.157 8281.367 - 8340.945: 2.4314% ( 135) 00:11:10.157 8340.945 - 8400.524: 3.8643% ( 188) 00:11:10.157 8400.524 - 8460.102: 5.6860% ( 239) 00:11:10.157 8460.102 - 8519.680: 7.9116% ( 292) 00:11:10.157 8519.680 - 8579.258: 10.5412% ( 345) 00:11:10.157 8579.258 - 8638.836: 13.5518% ( 395) 00:11:10.157 8638.836 - 8698.415: 16.7073% ( 414) 00:11:10.157 8698.415 - 8757.993: 19.9085% ( 420) 00:11:10.157 8757.993 - 8817.571: 23.2317% ( 436) 00:11:10.157 8817.571 - 8877.149: 26.5701% ( 438) 00:11:10.157 8877.149 - 8936.727: 30.0686% ( 459) 00:11:10.157 8936.727 - 8996.305: 33.4832% ( 448) 00:11:10.157 8996.305 - 9055.884: 37.0198% ( 464) 00:11:10.157 9055.884 - 9115.462: 40.4954% ( 456) 00:11:10.157 9115.462 - 9175.040: 43.9482% ( 453) 00:11:10.157 9175.040 - 9234.618: 47.2332% ( 431) 00:11:10.157 9234.618 - 9294.196: 50.4421% ( 421) 00:11:10.157 9294.196 - 9353.775: 53.4223% ( 391) 00:11:10.157 9353.775 - 9413.353: 56.1814% ( 362) 00:11:10.157 9413.353 - 9472.931: 58.9329% ( 361) 00:11:10.157 9472.931 - 9532.509: 61.3262% ( 314) 00:11:10.157 9532.509 - 9592.087: 63.6052% ( 299) 00:11:10.157 9592.087 - 9651.665: 65.7088% ( 276) 00:11:10.157 9651.665 - 9711.244: 67.5381% ( 240) 00:11:10.157 9711.244 - 9770.822: 69.1845% ( 216) 00:11:10.157 9770.822 - 9830.400: 70.7165% ( 201) 00:11:10.157 9830.400 - 9889.978: 72.1265% ( 185) 00:11:10.157 9889.978 - 9949.556: 73.5290% ( 184) 00:11:10.157 9949.556 - 10009.135: 74.8095% ( 168) 00:11:10.157 10009.135 - 10068.713: 76.0976% ( 169) 00:11:10.157 10068.713 - 10128.291: 77.3628% ( 166) 00:11:10.157 10128.291 - 10187.869: 78.5823% ( 160) 00:11:10.157 10187.869 - 10247.447: 79.8552% ( 167) 00:11:10.157 10247.447 - 10307.025: 81.0366% ( 155) 00:11:10.157 10307.025 - 10366.604: 82.1799% ( 150) 00:11:10.157 10366.604 - 10426.182: 83.3079% ( 148) 00:11:10.157 10426.182 - 10485.760: 84.3674% ( 139) 00:11:10.157 10485.760 - 10545.338: 85.4116% ( 137) 00:11:10.157 10545.338 - 10604.916: 86.4024% ( 130) 00:11:10.157 10604.916 - 10664.495: 87.2713% ( 114) 00:11:10.157 10664.495 - 10724.073: 88.0107% ( 97) 00:11:10.157 10724.073 - 10783.651: 88.7271% ( 94) 00:11:10.157 10783.651 - 10843.229: 89.3216% ( 78) 00:11:10.157 10843.229 - 10902.807: 89.8476% ( 69) 00:11:10.157 10902.807 - 10962.385: 90.3201% ( 62) 00:11:10.157 10962.385 - 11021.964: 90.7241% ( 53) 00:11:10.157 11021.964 - 11081.542: 91.0518% ( 43) 00:11:10.157 11081.542 - 11141.120: 91.3720% ( 42) 00:11:10.157 11141.120 - 11200.698: 91.6082% ( 31) 00:11:10.157 11200.698 - 11260.276: 91.8216% ( 28) 00:11:10.157 11260.276 - 11319.855: 92.0655% ( 32) 00:11:10.157 11319.855 - 11379.433: 92.2790% ( 28) 00:11:10.157 11379.433 - 11439.011: 92.5229% ( 32) 00:11:10.157 11439.011 - 11498.589: 92.7744% ( 33) 00:11:10.157 11498.589 - 11558.167: 93.0412% ( 35) 00:11:10.157 11558.167 - 11617.745: 93.3003% ( 34) 00:11:10.157 11617.745 - 11677.324: 93.5518% ( 33) 00:11:10.157 11677.324 - 11736.902: 93.8034% ( 33) 00:11:10.157 11736.902 - 11796.480: 94.0091% ( 27) 00:11:10.157 11796.480 - 11856.058: 94.2454% ( 31) 00:11:10.157 11856.058 - 11915.636: 94.4817% ( 31) 00:11:10.157 11915.636 - 11975.215: 94.7332% ( 33) 00:11:10.157 11975.215 - 12034.793: 94.9619% ( 30) 00:11:10.157 12034.793 - 12094.371: 95.1677% ( 27) 00:11:10.157 12094.371 - 12153.949: 95.3735% ( 27) 
00:11:10.157 12153.949 - 12213.527: 95.5564% ( 24) 00:11:10.157 12213.527 - 12273.105: 95.7317% ( 23) 00:11:10.157 12273.105 - 12332.684: 95.9146% ( 24) 00:11:10.157 12332.684 - 12392.262: 96.0976% ( 24) 00:11:10.157 12392.262 - 12451.840: 96.2805% ( 24) 00:11:10.157 12451.840 - 12511.418: 96.4787% ( 26) 00:11:10.157 12511.418 - 12570.996: 96.6463% ( 22) 00:11:10.157 12570.996 - 12630.575: 96.8140% ( 22) 00:11:10.157 12630.575 - 12690.153: 96.9436% ( 17) 00:11:10.157 12690.153 - 12749.731: 97.0732% ( 17) 00:11:10.157 12749.731 - 12809.309: 97.1875% ( 15) 00:11:10.157 12809.309 - 12868.887: 97.3323% ( 19) 00:11:10.157 12868.887 - 12928.465: 97.4238% ( 12) 00:11:10.157 12928.465 - 12988.044: 97.5152% ( 12) 00:11:10.157 12988.044 - 13047.622: 97.6067% ( 12) 00:11:10.157 13047.622 - 13107.200: 97.6448% ( 5) 00:11:10.157 13107.200 - 13166.778: 97.6905% ( 6) 00:11:10.157 13166.778 - 13226.356: 97.7134% ( 3) 00:11:10.157 13226.356 - 13285.935: 97.7515% ( 5) 00:11:10.157 13285.935 - 13345.513: 97.7896% ( 5) 00:11:10.157 13345.513 - 13405.091: 97.8201% ( 4) 00:11:10.158 13405.091 - 13464.669: 97.8582% ( 5) 00:11:10.158 13464.669 - 13524.247: 97.9192% ( 8) 00:11:10.158 13524.247 - 13583.825: 97.9726% ( 7) 00:11:10.158 13583.825 - 13643.404: 98.0412% ( 9) 00:11:10.158 13643.404 - 13702.982: 98.1098% ( 9) 00:11:10.158 13702.982 - 13762.560: 98.1707% ( 8) 00:11:10.158 13762.560 - 13822.138: 98.2317% ( 8) 00:11:10.158 13822.138 - 13881.716: 98.3003% ( 9) 00:11:10.158 13881.716 - 13941.295: 98.3613% ( 8) 00:11:10.158 13941.295 - 14000.873: 98.4223% ( 8) 00:11:10.158 14000.873 - 14060.451: 98.4832% ( 8) 00:11:10.158 14060.451 - 14120.029: 98.5366% ( 7) 00:11:10.158 14120.029 - 14179.607: 98.5823% ( 6) 00:11:10.158 14179.607 - 14239.185: 98.6357% ( 7) 00:11:10.158 14239.185 - 14298.764: 98.6890% ( 7) 00:11:10.158 14298.764 - 14358.342: 98.7348% ( 6) 00:11:10.158 14358.342 - 14417.920: 98.7881% ( 7) 00:11:10.158 14417.920 - 14477.498: 98.8338% ( 6) 00:11:10.158 14477.498 - 14537.076: 98.8796% ( 6) 00:11:10.158 14537.076 - 14596.655: 98.9253% ( 6) 00:11:10.158 14596.655 - 14656.233: 98.9710% ( 6) 00:11:10.158 14656.233 - 14715.811: 98.9939% ( 3) 00:11:10.158 14715.811 - 14775.389: 99.0168% ( 3) 00:11:10.158 14775.389 - 14834.967: 99.0244% ( 1) 00:11:10.158 20375.738 - 20494.895: 99.0473% ( 3) 00:11:10.158 20494.895 - 20614.051: 99.0625% ( 2) 00:11:10.158 20614.051 - 20733.207: 99.0854% ( 3) 00:11:10.158 20733.207 - 20852.364: 99.1159% ( 4) 00:11:10.158 20852.364 - 20971.520: 99.1311% ( 2) 00:11:10.158 20971.520 - 21090.676: 99.1540% ( 3) 00:11:10.158 21090.676 - 21209.833: 99.1768% ( 3) 00:11:10.158 21209.833 - 21328.989: 99.1921% ( 2) 00:11:10.158 21328.989 - 21448.145: 99.2149% ( 3) 00:11:10.158 21448.145 - 21567.302: 99.2378% ( 3) 00:11:10.158 21567.302 - 21686.458: 99.2607% ( 3) 00:11:10.158 21686.458 - 21805.615: 99.2835% ( 3) 00:11:10.158 21805.615 - 21924.771: 99.3064% ( 3) 00:11:10.158 21924.771 - 22043.927: 99.3293% ( 3) 00:11:10.158 22043.927 - 22163.084: 99.3521% ( 3) 00:11:10.158 22163.084 - 22282.240: 99.3750% ( 3) 00:11:10.158 22282.240 - 22401.396: 99.3902% ( 2) 00:11:10.158 22401.396 - 22520.553: 99.4131% ( 3) 00:11:10.158 22520.553 - 22639.709: 99.4360% ( 3) 00:11:10.158 22639.709 - 22758.865: 99.4588% ( 3) 00:11:10.158 22758.865 - 22878.022: 99.4817% ( 3) 00:11:10.158 22878.022 - 22997.178: 99.5046% ( 3) 00:11:10.158 22997.178 - 23116.335: 99.5122% ( 1) 00:11:10.158 28716.684 - 28835.840: 99.5274% ( 2) 00:11:10.158 28835.840 - 28954.996: 99.5503% ( 3) 00:11:10.158 28954.996 - 29074.153: 
99.5732% ( 3) 00:11:10.158 29074.153 - 29193.309: 99.5960% ( 3) 00:11:10.158 29193.309 - 29312.465: 99.6189% ( 3) 00:11:10.158 29312.465 - 29431.622: 99.6341% ( 2) 00:11:10.158 29431.622 - 29550.778: 99.6570% ( 3) 00:11:10.158 29550.778 - 29669.935: 99.6799% ( 3) 00:11:10.158 29669.935 - 29789.091: 99.7027% ( 3) 00:11:10.158 29789.091 - 29908.247: 99.7256% ( 3) 00:11:10.158 29908.247 - 30027.404: 99.7485% ( 3) 00:11:10.158 30027.404 - 30146.560: 99.7713% ( 3) 00:11:10.158 30146.560 - 30265.716: 99.7942% ( 3) 00:11:10.158 30265.716 - 30384.873: 99.8171% ( 3) 00:11:10.158 30384.873 - 30504.029: 99.8399% ( 3) 00:11:10.158 30504.029 - 30742.342: 99.8933% ( 7) 00:11:10.158 30742.342 - 30980.655: 99.9390% ( 6) 00:11:10.158 30980.655 - 31218.967: 99.9848% ( 6) 00:11:10.158 31218.967 - 31457.280: 100.0000% ( 2) 00:11:10.158 00:11:10.158 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:10.158 ============================================================================== 00:11:10.158 Range in us Cumulative IO count 00:11:10.158 7864.320 - 7923.898: 0.0076% ( 1) 00:11:10.158 7923.898 - 7983.476: 0.0610% ( 7) 00:11:10.158 7983.476 - 8043.055: 0.1524% ( 12) 00:11:10.158 8043.055 - 8102.633: 0.2896% ( 18) 00:11:10.158 8102.633 - 8162.211: 0.4878% ( 26) 00:11:10.158 8162.211 - 8221.789: 0.8994% ( 54) 00:11:10.158 8221.789 - 8281.367: 1.5930% ( 91) 00:11:10.158 8281.367 - 8340.945: 2.6067% ( 133) 00:11:10.158 8340.945 - 8400.524: 4.0320% ( 187) 00:11:10.158 8400.524 - 8460.102: 5.9146% ( 247) 00:11:10.158 8460.102 - 8519.680: 8.1555% ( 294) 00:11:10.158 8519.680 - 8579.258: 10.8460% ( 353) 00:11:10.158 8579.258 - 8638.836: 13.7500% ( 381) 00:11:10.158 8638.836 - 8698.415: 16.8293% ( 404) 00:11:10.158 8698.415 - 8757.993: 20.1220% ( 432) 00:11:10.158 8757.993 - 8817.571: 23.3994% ( 430) 00:11:10.158 8817.571 - 8877.149: 26.8064% ( 447) 00:11:10.158 8877.149 - 8936.727: 30.3049% ( 459) 00:11:10.158 8936.727 - 8996.305: 33.6814% ( 443) 00:11:10.158 8996.305 - 9055.884: 37.2027% ( 462) 00:11:10.158 9055.884 - 9115.462: 40.6555% ( 453) 00:11:10.158 9115.462 - 9175.040: 44.0091% ( 440) 00:11:10.158 9175.040 - 9234.618: 47.2180% ( 421) 00:11:10.158 9234.618 - 9294.196: 50.3430% ( 410) 00:11:10.158 9294.196 - 9353.775: 53.3841% ( 399) 00:11:10.158 9353.775 - 9413.353: 56.1433% ( 362) 00:11:10.158 9413.353 - 9472.931: 58.7195% ( 338) 00:11:10.158 9472.931 - 9532.509: 61.2576% ( 333) 00:11:10.158 9532.509 - 9592.087: 63.6433% ( 313) 00:11:10.158 9592.087 - 9651.665: 65.6174% ( 259) 00:11:10.158 9651.665 - 9711.244: 67.4390% ( 239) 00:11:10.158 9711.244 - 9770.822: 69.2378% ( 236) 00:11:10.158 9770.822 - 9830.400: 70.8232% ( 208) 00:11:10.158 9830.400 - 9889.978: 72.4085% ( 208) 00:11:10.158 9889.978 - 9949.556: 73.8262% ( 186) 00:11:10.158 9949.556 - 10009.135: 75.0305% ( 158) 00:11:10.158 10009.135 - 10068.713: 76.2957% ( 166) 00:11:10.158 10068.713 - 10128.291: 77.4771% ( 155) 00:11:10.158 10128.291 - 10187.869: 78.6738% ( 157) 00:11:10.158 10187.869 - 10247.447: 79.9009% ( 161) 00:11:10.158 10247.447 - 10307.025: 81.1204% ( 160) 00:11:10.158 10307.025 - 10366.604: 82.2180% ( 144) 00:11:10.158 10366.604 - 10426.182: 83.3308% ( 146) 00:11:10.158 10426.182 - 10485.760: 84.4284% ( 144) 00:11:10.158 10485.760 - 10545.338: 85.4345% ( 132) 00:11:10.158 10545.338 - 10604.916: 86.3415% ( 119) 00:11:10.158 10604.916 - 10664.495: 87.2256% ( 116) 00:11:10.158 10664.495 - 10724.073: 87.9954% ( 101) 00:11:10.158 10724.073 - 10783.651: 88.6509% ( 86) 00:11:10.158 10783.651 - 10843.229: 89.1616% ( 67) 
00:11:10.158 10843.229 - 10902.807: 89.6799% ( 68) 00:11:10.158 10902.807 - 10962.385: 90.1372% ( 60) 00:11:10.158 10962.385 - 11021.964: 90.5793% ( 58) 00:11:10.158 11021.964 - 11081.542: 90.9223% ( 45) 00:11:10.158 11081.542 - 11141.120: 91.2119% ( 38) 00:11:10.158 11141.120 - 11200.698: 91.4939% ( 37) 00:11:10.158 11200.698 - 11260.276: 91.7454% ( 33) 00:11:10.158 11260.276 - 11319.855: 92.0046% ( 34) 00:11:10.158 11319.855 - 11379.433: 92.2485% ( 32) 00:11:10.158 11379.433 - 11439.011: 92.4695% ( 29) 00:11:10.158 11439.011 - 11498.589: 92.6753% ( 27) 00:11:10.159 11498.589 - 11558.167: 92.8887% ( 28) 00:11:10.159 11558.167 - 11617.745: 93.0945% ( 27) 00:11:10.159 11617.745 - 11677.324: 93.3003% ( 27) 00:11:10.159 11677.324 - 11736.902: 93.5747% ( 36) 00:11:10.159 11736.902 - 11796.480: 93.8110% ( 31) 00:11:10.159 11796.480 - 11856.058: 94.0854% ( 36) 00:11:10.159 11856.058 - 11915.636: 94.3750% ( 38) 00:11:10.159 11915.636 - 11975.215: 94.6113% ( 31) 00:11:10.159 11975.215 - 12034.793: 94.8628% ( 33) 00:11:10.159 12034.793 - 12094.371: 95.0991% ( 31) 00:11:10.159 12094.371 - 12153.949: 95.3430% ( 32) 00:11:10.159 12153.949 - 12213.527: 95.5412% ( 26) 00:11:10.159 12213.527 - 12273.105: 95.7393% ( 26) 00:11:10.159 12273.105 - 12332.684: 95.9223% ( 24) 00:11:10.159 12332.684 - 12392.262: 96.0899% ( 22) 00:11:10.159 12392.262 - 12451.840: 96.2348% ( 19) 00:11:10.159 12451.840 - 12511.418: 96.4024% ( 22) 00:11:10.159 12511.418 - 12570.996: 96.5625% ( 21) 00:11:10.159 12570.996 - 12630.575: 96.6997% ( 18) 00:11:10.159 12630.575 - 12690.153: 96.8369% ( 18) 00:11:10.159 12690.153 - 12749.731: 96.9665% ( 17) 00:11:10.159 12749.731 - 12809.309: 97.0808% ( 15) 00:11:10.159 12809.309 - 12868.887: 97.1951% ( 15) 00:11:10.159 12868.887 - 12928.465: 97.2866% ( 12) 00:11:10.159 12928.465 - 12988.044: 97.3857% ( 13) 00:11:10.159 12988.044 - 13047.622: 97.4619% ( 10) 00:11:10.159 13047.622 - 13107.200: 97.5229% ( 8) 00:11:10.159 13107.200 - 13166.778: 97.5534% ( 4) 00:11:10.159 13166.778 - 13226.356: 97.5838% ( 4) 00:11:10.159 13226.356 - 13285.935: 97.6143% ( 4) 00:11:10.159 13285.935 - 13345.513: 97.6753% ( 8) 00:11:10.159 13345.513 - 13405.091: 97.7363% ( 8) 00:11:10.159 13405.091 - 13464.669: 97.8049% ( 9) 00:11:10.159 13464.669 - 13524.247: 97.8659% ( 8) 00:11:10.159 13524.247 - 13583.825: 97.9192% ( 7) 00:11:10.159 13583.825 - 13643.404: 97.9573% ( 5) 00:11:10.159 13643.404 - 13702.982: 98.0030% ( 6) 00:11:10.159 13702.982 - 13762.560: 98.0488% ( 6) 00:11:10.159 13762.560 - 13822.138: 98.0869% ( 5) 00:11:10.159 13822.138 - 13881.716: 98.1250% ( 5) 00:11:10.159 13881.716 - 13941.295: 98.1784% ( 7) 00:11:10.159 13941.295 - 14000.873: 98.2393% ( 8) 00:11:10.159 14000.873 - 14060.451: 98.3155% ( 10) 00:11:10.159 14060.451 - 14120.029: 98.3841% ( 9) 00:11:10.159 14120.029 - 14179.607: 98.4451% ( 8) 00:11:10.159 14179.607 - 14239.185: 98.5061% ( 8) 00:11:10.159 14239.185 - 14298.764: 98.5747% ( 9) 00:11:10.159 14298.764 - 14358.342: 98.6357% ( 8) 00:11:10.159 14358.342 - 14417.920: 98.7043% ( 9) 00:11:10.159 14417.920 - 14477.498: 98.7652% ( 8) 00:11:10.159 14477.498 - 14537.076: 98.8034% ( 5) 00:11:10.159 14537.076 - 14596.655: 98.8262% ( 3) 00:11:10.159 14596.655 - 14656.233: 98.8567% ( 4) 00:11:10.159 14656.233 - 14715.811: 98.8796% ( 3) 00:11:10.159 14715.811 - 14775.389: 98.9024% ( 3) 00:11:10.159 14775.389 - 14834.967: 98.9253% ( 3) 00:11:10.159 14834.967 - 14894.545: 98.9558% ( 4) 00:11:10.159 14894.545 - 14954.124: 98.9710% ( 2) 00:11:10.159 14954.124 - 15013.702: 98.9939% ( 3) 00:11:10.159 
15013.702 - 15073.280: 99.0168% ( 3) 00:11:10.159 15073.280 - 15132.858: 99.0244% ( 1) 00:11:10.159 17515.985 - 17635.142: 99.0320% ( 1) 00:11:10.159 17635.142 - 17754.298: 99.0549% ( 3) 00:11:10.159 17754.298 - 17873.455: 99.0777% ( 3) 00:11:10.159 17873.455 - 17992.611: 99.1006% ( 3) 00:11:10.159 17992.611 - 18111.767: 99.1159% ( 2) 00:11:10.159 18111.767 - 18230.924: 99.1387% ( 3) 00:11:10.159 18230.924 - 18350.080: 99.1616% ( 3) 00:11:10.159 18350.080 - 18469.236: 99.1845% ( 3) 00:11:10.159 18469.236 - 18588.393: 99.2073% ( 3) 00:11:10.159 18588.393 - 18707.549: 99.2302% ( 3) 00:11:10.159 18707.549 - 18826.705: 99.2530% ( 3) 00:11:10.159 18826.705 - 18945.862: 99.2759% ( 3) 00:11:10.159 18945.862 - 19065.018: 99.2912% ( 2) 00:11:10.159 19065.018 - 19184.175: 99.3140% ( 3) 00:11:10.159 19184.175 - 19303.331: 99.3369% ( 3) 00:11:10.159 19303.331 - 19422.487: 99.3521% ( 2) 00:11:10.159 19422.487 - 19541.644: 99.3750% ( 3) 00:11:10.159 19541.644 - 19660.800: 99.3979% ( 3) 00:11:10.159 19660.800 - 19779.956: 99.4207% ( 3) 00:11:10.159 19779.956 - 19899.113: 99.4436% ( 3) 00:11:10.159 19899.113 - 20018.269: 99.4665% ( 3) 00:11:10.159 20018.269 - 20137.425: 99.4893% ( 3) 00:11:10.159 20137.425 - 20256.582: 99.5122% ( 3) 00:11:10.159 25976.087 - 26095.244: 99.5351% ( 3) 00:11:10.159 26095.244 - 26214.400: 99.5579% ( 3) 00:11:10.159 26214.400 - 26333.556: 99.5732% ( 2) 00:11:10.159 26333.556 - 26452.713: 99.5960% ( 3) 00:11:10.159 26452.713 - 26571.869: 99.6189% ( 3) 00:11:10.159 26571.869 - 26691.025: 99.6418% ( 3) 00:11:10.159 26691.025 - 26810.182: 99.6646% ( 3) 00:11:10.159 26810.182 - 26929.338: 99.6875% ( 3) 00:11:10.159 26929.338 - 27048.495: 99.7104% ( 3) 00:11:10.159 27048.495 - 27167.651: 99.7332% ( 3) 00:11:10.159 27167.651 - 27286.807: 99.7561% ( 3) 00:11:10.159 27286.807 - 27405.964: 99.7866% ( 4) 00:11:10.159 27405.964 - 27525.120: 99.8095% ( 3) 00:11:10.159 27525.120 - 27644.276: 99.8323% ( 3) 00:11:10.159 27644.276 - 27763.433: 99.8476% ( 2) 00:11:10.159 27763.433 - 27882.589: 99.8704% ( 3) 00:11:10.159 27882.589 - 28001.745: 99.8933% ( 3) 00:11:10.159 28001.745 - 28120.902: 99.9162% ( 3) 00:11:10.159 28120.902 - 28240.058: 99.9390% ( 3) 00:11:10.159 28240.058 - 28359.215: 99.9619% ( 3) 00:11:10.159 28359.215 - 28478.371: 99.9848% ( 3) 00:11:10.159 28478.371 - 28597.527: 100.0000% ( 2) 00:11:10.159 00:11:10.159 18:03:02 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:11.538 Initializing NVMe Controllers 00:11:11.538 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:11.538 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:11.538 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:11.538 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:11.538 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:11.538 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:11.538 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:11.538 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:11.538 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:11.538 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:11.538 Initialization complete. Launching workers. 
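
A note before the results below. This second pass was started with the command visible above:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0

As a hedged gloss (flag meanings taken from spdk_nvme_perf's help text; worth re-checking against this exact SPDK revision): -q 128 is the queue depth, -w write selects a write workload, -o 12288 is the I/O size in bytes (12 KiB), -t 1 runs for one second, -L enables software latency tracking (doubled as -LL it also emits the per-bucket histograms), and -i 0 is the shared-memory group ID so the tool can coexist with other SPDK processes. In the histograms, each record "lower - upper: N% ( count)" gives a latency bucket in microseconds, the cumulative percentage of I/Os completed at or below the bucket's upper bound, and the number of I/Os that landed in that bucket. A minimal sketch for pulling a percentile back out of a saved copy of this log (perf.log is a hypothetical filename; it assumes one histogram record per line, as in the reconstructed table below, and only reports the first histogram it meets):

    # Hedged sketch (bash/awk), not part of the test suite: print the first
    # bucket whose cumulative share reaches 99% in the first histogram block.
    awk '$3 == "-" && $5 ~ /%$/ {
           if (!hit && $5 + 0 >= 99) { sub(/:$/, "", $4); print "p99 <= " $4 " us"; hit = 1 }
         }' perf.log
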
00:11:11.538 ========================================================
00:11:11.538 Latency(us)
00:11:11.538 Device Information : IOPS MiB/s Average min max
00:11:11.538 PCIE (0000:00:10.0) NSID 1 from core 0: 6422.47 75.26 20000.02 16654.53 49988.15
00:11:11.538 PCIE (0000:00:11.0) NSID 1 from core 0: 6422.47 75.26 19955.83 17164.32 47550.90
00:11:11.538 PCIE (0000:00:13.0) NSID 1 from core 0: 6422.47 75.26 19911.25 16963.77 45486.88
00:11:11.538 PCIE (0000:00:12.0) NSID 1 from core 0: 6422.47 75.26 19866.97 17066.44 43066.57
00:11:11.538 PCIE (0000:00:12.0) NSID 2 from core 0: 6422.47 75.26 19819.26 17071.91 40558.78
00:11:11.538 PCIE (0000:00:12.0) NSID 3 from core 0: 6422.47 75.26 19771.61 17150.20 37926.86
00:11:11.538 ========================================================
00:11:11.538 Total : 38534.83 451.58 19887.49 16654.53 49988.15
00:11:11.538
00:11:11.538 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:11:11.538 =================================================================================
00:11:11.538 1.00000% : 17396.829us
00:11:11.538 10.00000% : 18111.767us
00:11:11.538 25.00000% : 18826.705us
00:11:11.538 50.00000% : 19541.644us
00:11:11.538 75.00000% : 20494.895us
00:11:11.538 90.00000% : 21448.145us
00:11:11.538 95.00000% : 21805.615us
00:11:11.538 98.00000% : 23116.335us
00:11:11.538 99.00000% : 38130.036us
00:11:11.538 99.50000% : 48139.171us
00:11:11.538 99.90000% : 49807.360us
00:11:11.538 99.99000% : 50045.673us
00:11:11.538 99.99900% : 50045.673us
00:11:11.538 99.99990% : 50045.673us
00:11:11.538 99.99999% : 50045.673us
00:11:11.538
00:11:11.538 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:11.538 =================================================================================
00:11:11.538 1.00000% : 17754.298us
00:11:11.538 10.00000% : 18469.236us
00:11:11.538 25.00000% : 19065.018us
00:11:11.538 50.00000% : 19660.800us
00:11:11.538 75.00000% : 20137.425us
00:11:11.538 90.00000% : 20852.364us
00:11:11.538 95.00000% : 21328.989us
00:11:11.538 98.00000% : 22520.553us
00:11:11.538 99.00000% : 36223.535us
00:11:11.538 99.50000% : 45994.356us
00:11:11.538 99.90000% : 47424.233us
00:11:11.538 99.99000% : 47662.545us
00:11:11.538 99.99900% : 47662.545us
00:11:11.538 99.99990% : 47662.545us
00:11:11.538 99.99999% : 47662.545us
00:11:11.538
00:11:11.538 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:11:11.538 =================================================================================
00:11:11.538 1.00000% : 17754.298us
00:11:11.538 10.00000% : 18350.080us
00:11:11.538 25.00000% : 19065.018us
00:11:11.538 50.00000% : 19660.800us
00:11:11.538 75.00000% : 20137.425us
00:11:11.538 90.00000% : 20852.364us
00:11:11.538 95.00000% : 21328.989us
00:11:11.538 98.00000% : 22401.396us
00:11:11.538 99.00000% : 34078.720us
00:11:11.538 99.50000% : 43849.542us
00:11:11.538 99.90000% : 45279.418us
00:11:11.538 99.99000% : 45517.731us
00:11:11.538 99.99900% : 45517.731us
00:11:11.538 99.99990% : 45517.731us
00:11:11.538 99.99999% : 45517.731us
00:11:11.538
00:11:11.538 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:11.538 =================================================================================
00:11:11.538 1.00000% : 17754.298us
00:11:11.538 10.00000% : 18469.236us
00:11:11.538 25.00000% : 19065.018us
00:11:11.538 50.00000% : 19779.956us
00:11:11.538 75.00000% : 20137.425us
00:11:11.538 90.00000% : 20733.207us
00:11:11.538 95.00000% : 21209.833us
00:11:11.538 98.00000% : 22282.240us
00:11:11.538 99.00000% : 31457.280us
00:11:11.538 99.50000% : 41466.415us
00:11:11.538 99.90000% : 42896.291us
00:11:11.538 99.99000% : 43134.604us
00:11:11.538 99.99900% : 43134.604us
00:11:11.538 99.99990% : 43134.604us
00:11:11.538 99.99999% : 43134.604us
00:11:11.538
00:11:11.538 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:11:11.538 =================================================================================
00:11:11.538 1.00000% : 17754.298us
00:11:11.538 10.00000% : 18469.236us
00:11:11.538 25.00000% : 19065.018us
00:11:11.538 50.00000% : 19779.956us
00:11:11.538 75.00000% : 20137.425us
00:11:11.538 90.00000% : 20733.207us
00:11:11.538 95.00000% : 21209.833us
00:11:11.538 98.00000% : 22282.240us
00:11:11.538 99.00000% : 29312.465us
00:11:11.538 99.50000% : 38844.975us
00:11:11.538 99.90000% : 40274.851us
00:11:11.538 99.99000% : 40751.476us
00:11:11.538 99.99900% : 40751.476us
00:11:11.538 99.99990% : 40751.476us
00:11:11.538 99.99999% : 40751.476us
00:11:11.538
00:11:11.538 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:11:11.538 =================================================================================
00:11:11.538 1.00000% : 17754.298us
00:11:11.538 10.00000% : 18469.236us
00:11:11.538 25.00000% : 19065.018us
00:11:11.538 50.00000% : 19660.800us
00:11:11.538 75.00000% : 20137.425us
00:11:11.538 90.00000% : 20733.207us
00:11:11.538 95.00000% : 21209.833us
00:11:11.538 98.00000% : 22401.396us
00:11:11.538 99.00000% : 26452.713us
00:11:11.538 99.50000% : 36223.535us
00:11:11.538 99.90000% : 37653.411us
00:11:11.538 99.99000% : 38130.036us
00:11:11.538 99.99900% : 38130.036us
00:11:11.538 99.99990% : 38130.036us
00:11:11.539 99.99999% : 38130.036us
00:11:11.539
00:11:11.539 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:11:11.539 ==============================================================================
00:11:11.539 Range in us Cumulative IO count
00:11:11.539 16562.735 - 16681.891: 0.0155% ( 1)
00:11:11.539 16681.891 - 16801.047: 0.0309% ( 1)
00:11:11.539 16920.204 - 17039.360: 0.1856% ( 10)
00:11:11.539 17039.360 - 17158.516: 0.3558% ( 11)
00:11:11.539 17158.516 - 17277.673: 0.9437% ( 38)
00:11:11.539 17277.673 - 17396.829: 1.9647% ( 66)
00:11:11.539 17396.829 - 17515.985: 3.2024% ( 80)
00:11:11.539 17515.985 - 17635.142: 4.7494% ( 100)
00:11:11.539 17635.142 - 17754.298: 6.4047% ( 107)
00:11:11.539 17754.298 - 17873.455: 7.8280% ( 92)
00:11:11.539 17873.455 - 17992.611: 9.6844% ( 120)
00:11:11.539 17992.611 - 18111.767: 11.4790% ( 116)
00:11:11.539 18111.767 - 18230.924: 13.3818% ( 123)
00:11:11.539 18230.924 - 18350.080: 15.4703% ( 135)
00:11:11.539 18350.080 - 18469.236: 17.8063% ( 151)
00:11:11.539 18469.236 - 18588.393: 20.3744% ( 166)
00:11:11.539 18588.393 - 18707.549: 23.4066% ( 196)
00:11:11.539 18707.549 - 18826.705: 27.2432% ( 248)
00:11:11.539 18826.705 - 18945.862: 30.6467% ( 220)
00:11:11.539 18945.862 - 19065.018: 34.3441% ( 239)
00:11:11.539 19065.018 - 19184.175: 38.8769% ( 293)
00:11:11.539 19184.175 - 19303.331: 43.5953% ( 305)
00:11:11.539 19303.331 - 19422.487: 47.5557% ( 256)
00:11:11.539 19422.487 - 19541.644: 51.5934% ( 261)
00:11:11.539 19541.644 - 19660.800: 55.8014% ( 272)
00:11:11.539 19660.800 - 19779.956: 59.5297% ( 241)
00:11:11.539 19779.956 - 19899.113: 63.4437% ( 253)
00:11:11.539 19899.113 - 20018.269: 66.7698% ( 215)
00:11:11.539 20018.269 - 20137.425: 70.0186% ( 210)
00:11:11.539 20137.425 - 20256.582: 72.6640% ( 171)
00:11:11.539 20256.582 - 20375.738: 74.7370% ( 134)
00:11:11.539 20375.738 - 20494.895: 76.5934% ( 120) 00:11:11.539 20494.895 - 20614.051: 78.1714% ( 102) 00:11:11.539 20614.051 - 20733.207: 79.7803% ( 104) 00:11:11.539 20733.207 - 20852.364: 81.5903% ( 117) 00:11:11.539 20852.364 - 20971.520: 83.5396% ( 126) 00:11:11.539 20971.520 - 21090.676: 85.4579% ( 124) 00:11:11.539 21090.676 - 21209.833: 87.4691% ( 130) 00:11:11.539 21209.833 - 21328.989: 89.3100% ( 119) 00:11:11.539 21328.989 - 21448.145: 90.9344% ( 105) 00:11:11.539 21448.145 - 21567.302: 92.7599% ( 118) 00:11:11.539 21567.302 - 21686.458: 93.9511% ( 77) 00:11:11.539 21686.458 - 21805.615: 95.0186% ( 69) 00:11:11.539 21805.615 - 21924.771: 95.8540% ( 54) 00:11:11.539 21924.771 - 22043.927: 96.5656% ( 46) 00:11:11.539 22043.927 - 22163.084: 97.1225% ( 36) 00:11:11.539 22163.084 - 22282.240: 97.3546% ( 15) 00:11:11.539 22282.240 - 22401.396: 97.5248% ( 11) 00:11:11.539 22401.396 - 22520.553: 97.6176% ( 6) 00:11:11.539 22520.553 - 22639.709: 97.7104% ( 6) 00:11:11.539 22639.709 - 22758.865: 97.8342% ( 8) 00:11:11.539 22758.865 - 22878.022: 97.8806% ( 3) 00:11:11.539 22878.022 - 22997.178: 97.9579% ( 5) 00:11:11.539 22997.178 - 23116.335: 98.0198% ( 4) 00:11:11.539 34078.720 - 34317.033: 98.0353% ( 1) 00:11:11.539 34317.033 - 34555.345: 98.0972% ( 4) 00:11:11.539 34555.345 - 34793.658: 98.1590% ( 4) 00:11:11.539 34793.658 - 35031.971: 98.2209% ( 4) 00:11:11.539 35031.971 - 35270.284: 98.2828% ( 4) 00:11:11.539 35270.284 - 35508.596: 98.3447% ( 4) 00:11:11.539 35508.596 - 35746.909: 98.4066% ( 4) 00:11:11.539 35746.909 - 35985.222: 98.4839% ( 5) 00:11:11.539 35985.222 - 36223.535: 98.5458% ( 4) 00:11:11.539 36223.535 - 36461.847: 98.5922% ( 3) 00:11:11.539 36461.847 - 36700.160: 98.6696% ( 5) 00:11:11.539 36700.160 - 36938.473: 98.7005% ( 2) 00:11:11.539 36938.473 - 37176.785: 98.7933% ( 6) 00:11:11.539 37176.785 - 37415.098: 98.8397% ( 3) 00:11:11.539 37415.098 - 37653.411: 98.9171% ( 5) 00:11:11.539 37653.411 - 37891.724: 98.9790% ( 4) 00:11:11.539 37891.724 - 38130.036: 99.0099% ( 2) 00:11:11.539 46232.669 - 46470.982: 99.0718% ( 4) 00:11:11.539 46470.982 - 46709.295: 99.1337% ( 4) 00:11:11.539 46709.295 - 46947.607: 99.1801% ( 3) 00:11:11.539 46947.607 - 47185.920: 99.2574% ( 5) 00:11:11.539 47185.920 - 47424.233: 99.3038% ( 3) 00:11:11.539 47424.233 - 47662.545: 99.3812% ( 5) 00:11:11.539 47662.545 - 47900.858: 99.4431% ( 4) 00:11:11.539 47900.858 - 48139.171: 99.5050% ( 4) 00:11:11.539 48139.171 - 48377.484: 99.5668% ( 4) 00:11:11.539 48377.484 - 48615.796: 99.6442% ( 5) 00:11:11.539 48615.796 - 48854.109: 99.7061% ( 4) 00:11:11.539 48854.109 - 49092.422: 99.7525% ( 3) 00:11:11.539 49092.422 - 49330.735: 99.8298% ( 5) 00:11:11.539 49330.735 - 49569.047: 99.8917% ( 4) 00:11:11.539 49569.047 - 49807.360: 99.9536% ( 4) 00:11:11.539 49807.360 - 50045.673: 100.0000% ( 3) 00:11:11.539 00:11:11.539 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:11.539 ============================================================================== 00:11:11.539 Range in us Cumulative IO count 00:11:11.539 17158.516 - 17277.673: 0.1083% ( 7) 00:11:11.539 17277.673 - 17396.829: 0.2630% ( 10) 00:11:11.539 17396.829 - 17515.985: 0.4332% ( 11) 00:11:11.539 17515.985 - 17635.142: 0.8818% ( 29) 00:11:11.539 17635.142 - 17754.298: 1.5161% ( 41) 00:11:11.539 17754.298 - 17873.455: 2.2896% ( 50) 00:11:11.539 17873.455 - 17992.611: 3.4499% ( 75) 00:11:11.539 17992.611 - 18111.767: 5.0897% ( 106) 00:11:11.539 18111.767 - 18230.924: 7.0699% ( 128) 00:11:11.539 18230.924 - 18350.080: 9.2203% ( 139) 
00:11:11.539 18350.080 - 18469.236: 12.2834% ( 198) 00:11:11.539 18469.236 - 18588.393: 15.9963% ( 240) 00:11:11.539 18588.393 - 18707.549: 18.8274% ( 183) 00:11:11.539 18707.549 - 18826.705: 21.3181% ( 161) 00:11:11.539 18826.705 - 18945.862: 23.2673% ( 126) 00:11:11.539 18945.862 - 19065.018: 25.9282% ( 172) 00:11:11.539 19065.018 - 19184.175: 30.5229% ( 297) 00:11:11.539 19184.175 - 19303.331: 34.4678% ( 255) 00:11:11.539 19303.331 - 19422.487: 40.4239% ( 385) 00:11:11.539 19422.487 - 19541.644: 45.9777% ( 359) 00:11:11.539 19541.644 - 19660.800: 52.0266% ( 391) 00:11:11.539 19660.800 - 19779.956: 58.8026% ( 438) 00:11:11.539 19779.956 - 19899.113: 65.6714% ( 444) 00:11:11.539 19899.113 - 20018.269: 70.9468% ( 341) 00:11:11.539 20018.269 - 20137.425: 75.4486% ( 291) 00:11:11.539 20137.425 - 20256.582: 80.5074% ( 327) 00:11:11.539 20256.582 - 20375.738: 84.0656% ( 230) 00:11:11.539 20375.738 - 20494.895: 86.1231% ( 133) 00:11:11.539 20494.895 - 20614.051: 88.0724% ( 126) 00:11:11.539 20614.051 - 20733.207: 89.6040% ( 99) 00:11:11.539 20733.207 - 20852.364: 91.3830% ( 115) 00:11:11.539 20852.364 - 20971.520: 92.7135% ( 86) 00:11:11.539 20971.520 - 21090.676: 93.9202% ( 78) 00:11:11.539 21090.676 - 21209.833: 94.8484% ( 60) 00:11:11.539 21209.833 - 21328.989: 95.7457% ( 58) 00:11:11.539 21328.989 - 21448.145: 96.7512% ( 65) 00:11:11.539 21448.145 - 21567.302: 97.1225% ( 24) 00:11:11.539 21567.302 - 21686.458: 97.3391% ( 14) 00:11:11.539 21686.458 - 21805.615: 97.5093% ( 11) 00:11:11.539 21805.615 - 21924.771: 97.6485% ( 9) 00:11:11.539 21924.771 - 22043.927: 97.7723% ( 8) 00:11:11.539 22043.927 - 22163.084: 97.8342% ( 4) 00:11:11.539 22163.084 - 22282.240: 97.9115% ( 5) 00:11:11.539 22282.240 - 22401.396: 97.9579% ( 3) 00:11:11.539 22401.396 - 22520.553: 98.0043% ( 3) 00:11:11.539 22520.553 - 22639.709: 98.0198% ( 1) 00:11:11.539 32648.844 - 32887.156: 98.0662% ( 3) 00:11:11.539 32887.156 - 33125.469: 98.1281% ( 4) 00:11:11.539 33125.469 - 33363.782: 98.2054% ( 5) 00:11:11.539 33363.782 - 33602.095: 98.2673% ( 4) 00:11:11.539 33602.095 - 33840.407: 98.3447% ( 5) 00:11:11.539 33840.407 - 34078.720: 98.4220% ( 5) 00:11:11.539 34078.720 - 34317.033: 98.4839% ( 4) 00:11:11.539 34317.033 - 34555.345: 98.5458% ( 4) 00:11:11.539 34555.345 - 34793.658: 98.6231% ( 5) 00:11:11.539 34793.658 - 35031.971: 98.7005% ( 5) 00:11:11.539 35031.971 - 35270.284: 98.7624% ( 4) 00:11:11.539 35270.284 - 35508.596: 98.8243% ( 4) 00:11:11.539 35508.596 - 35746.909: 98.9016% ( 5) 00:11:11.539 35746.909 - 35985.222: 98.9790% ( 5) 00:11:11.539 35985.222 - 36223.535: 99.0099% ( 2) 00:11:11.539 44087.855 - 44326.167: 99.0254% ( 1) 00:11:11.539 44326.167 - 44564.480: 99.1027% ( 5) 00:11:11.539 44564.480 - 44802.793: 99.1646% ( 4) 00:11:11.539 44802.793 - 45041.105: 99.2265% ( 4) 00:11:11.539 45041.105 - 45279.418: 99.3038% ( 5) 00:11:11.539 45279.418 - 45517.731: 99.3657% ( 4) 00:11:11.539 45517.731 - 45756.044: 99.4431% ( 5) 00:11:11.539 45756.044 - 45994.356: 99.5204% ( 5) 00:11:11.539 45994.356 - 46232.669: 99.5823% ( 4) 00:11:11.539 46232.669 - 46470.982: 99.6597% ( 5) 00:11:11.539 46470.982 - 46709.295: 99.7370% ( 5) 00:11:11.539 46709.295 - 46947.607: 99.7989% ( 4) 00:11:11.539 46947.607 - 47185.920: 99.8762% ( 5) 00:11:11.539 47185.920 - 47424.233: 99.9536% ( 5) 00:11:11.539 47424.233 - 47662.545: 100.0000% ( 3) 00:11:11.539 00:11:11.540 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:11.540 ============================================================================== 00:11:11.540 Range in us 
Cumulative IO count 00:11:11.540 16920.204 - 17039.360: 0.0155% ( 1) 00:11:11.540 17039.360 - 17158.516: 0.0309% ( 1) 00:11:11.540 17158.516 - 17277.673: 0.0774% ( 3) 00:11:11.540 17277.673 - 17396.829: 0.1856% ( 7) 00:11:11.540 17396.829 - 17515.985: 0.3713% ( 12) 00:11:11.540 17515.985 - 17635.142: 0.7890% ( 27) 00:11:11.540 17635.142 - 17754.298: 1.6399% ( 55) 00:11:11.540 17754.298 - 17873.455: 2.6918% ( 68) 00:11:11.540 17873.455 - 17992.611: 3.9913% ( 84) 00:11:11.540 17992.611 - 18111.767: 5.8014% ( 117) 00:11:11.540 18111.767 - 18230.924: 7.9981% ( 142) 00:11:11.540 18230.924 - 18350.080: 10.7673% ( 179) 00:11:11.540 18350.080 - 18469.236: 13.4901% ( 176) 00:11:11.540 18469.236 - 18588.393: 17.0483% ( 230) 00:11:11.540 18588.393 - 18707.549: 19.8793% ( 183) 00:11:11.540 18707.549 - 18826.705: 22.1535% ( 147) 00:11:11.540 18826.705 - 18945.862: 24.1337% ( 128) 00:11:11.540 18945.862 - 19065.018: 26.9647% ( 183) 00:11:11.540 19065.018 - 19184.175: 30.8942% ( 254) 00:11:11.540 19184.175 - 19303.331: 35.7673% ( 315) 00:11:11.540 19303.331 - 19422.487: 40.9963% ( 338) 00:11:11.540 19422.487 - 19541.644: 46.0551% ( 327) 00:11:11.540 19541.644 - 19660.800: 51.3769% ( 344) 00:11:11.540 19660.800 - 19779.956: 58.9573% ( 490) 00:11:11.540 19779.956 - 19899.113: 65.1454% ( 400) 00:11:11.540 19899.113 - 20018.269: 70.2197% ( 328) 00:11:11.540 20018.269 - 20137.425: 75.9127% ( 368) 00:11:11.540 20137.425 - 20256.582: 79.6256% ( 240) 00:11:11.540 20256.582 - 20375.738: 82.6423% ( 195) 00:11:11.540 20375.738 - 20494.895: 85.0402% ( 155) 00:11:11.540 20494.895 - 20614.051: 86.7884% ( 113) 00:11:11.540 20614.051 - 20733.207: 88.5056% ( 111) 00:11:11.540 20733.207 - 20852.364: 90.5631% ( 133) 00:11:11.540 20852.364 - 20971.520: 91.9864% ( 92) 00:11:11.540 20971.520 - 21090.676: 93.1776% ( 77) 00:11:11.540 21090.676 - 21209.833: 94.1832% ( 65) 00:11:11.540 21209.833 - 21328.989: 95.0959% ( 59) 00:11:11.540 21328.989 - 21448.145: 96.2717% ( 76) 00:11:11.540 21448.145 - 21567.302: 96.8131% ( 35) 00:11:11.540 21567.302 - 21686.458: 97.2308% ( 27) 00:11:11.540 21686.458 - 21805.615: 97.5402% ( 20) 00:11:11.540 21805.615 - 21924.771: 97.6795% ( 9) 00:11:11.540 21924.771 - 22043.927: 97.7877% ( 7) 00:11:11.540 22043.927 - 22163.084: 97.8806% ( 6) 00:11:11.540 22163.084 - 22282.240: 97.9579% ( 5) 00:11:11.540 22282.240 - 22401.396: 98.0043% ( 3) 00:11:11.540 22401.396 - 22520.553: 98.0198% ( 1) 00:11:11.540 30742.342 - 30980.655: 98.1281% ( 7) 00:11:11.540 30980.655 - 31218.967: 98.2673% ( 9) 00:11:11.540 31218.967 - 31457.280: 98.3292% ( 4) 00:11:11.540 31457.280 - 31695.593: 98.3911% ( 4) 00:11:11.540 31695.593 - 31933.905: 98.4530% ( 4) 00:11:11.540 31933.905 - 32172.218: 98.5149% ( 4) 00:11:11.540 32172.218 - 32410.531: 98.5767% ( 4) 00:11:11.540 32410.531 - 32648.844: 98.6541% ( 5) 00:11:11.540 32648.844 - 32887.156: 98.7160% ( 4) 00:11:11.540 32887.156 - 33125.469: 98.7933% ( 5) 00:11:11.540 33125.469 - 33363.782: 98.8707% ( 5) 00:11:11.540 33363.782 - 33602.095: 98.9325% ( 4) 00:11:11.540 33602.095 - 33840.407: 98.9944% ( 4) 00:11:11.540 33840.407 - 34078.720: 99.0099% ( 1) 00:11:11.540 41943.040 - 42181.353: 99.0254% ( 1) 00:11:11.540 42181.353 - 42419.665: 99.0873% ( 4) 00:11:11.540 42419.665 - 42657.978: 99.1491% ( 4) 00:11:11.540 42657.978 - 42896.291: 99.2110% ( 4) 00:11:11.540 42896.291 - 43134.604: 99.2884% ( 5) 00:11:11.540 43134.604 - 43372.916: 99.3657% ( 5) 00:11:11.540 43372.916 - 43611.229: 99.4431% ( 5) 00:11:11.540 43611.229 - 43849.542: 99.5050% ( 4) 00:11:11.540 43849.542 - 
44087.855: 99.5668% ( 4) 00:11:11.540 44087.855 - 44326.167: 99.6442% ( 5) 00:11:11.540 44326.167 - 44564.480: 99.7215% ( 5) 00:11:11.540 44564.480 - 44802.793: 99.7834% ( 4) 00:11:11.540 44802.793 - 45041.105: 99.8608% ( 5) 00:11:11.540 45041.105 - 45279.418: 99.9226% ( 4) 00:11:11.540 45279.418 - 45517.731: 100.0000% ( 5) 00:11:11.540 00:11:11.540 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:11.540 ============================================================================== 00:11:11.540 Range in us Cumulative IO count 00:11:11.540 17039.360 - 17158.516: 0.0155% ( 1) 00:11:11.540 17158.516 - 17277.673: 0.0309% ( 1) 00:11:11.540 17277.673 - 17396.829: 0.0619% ( 2) 00:11:11.540 17396.829 - 17515.985: 0.2166% ( 10) 00:11:11.540 17515.985 - 17635.142: 0.4796% ( 17) 00:11:11.540 17635.142 - 17754.298: 1.1448% ( 43) 00:11:11.540 17754.298 - 17873.455: 1.9183% ( 50) 00:11:11.540 17873.455 - 17992.611: 3.1095% ( 77) 00:11:11.540 17992.611 - 18111.767: 5.0743% ( 127) 00:11:11.540 18111.767 - 18230.924: 6.8533% ( 115) 00:11:11.540 18230.924 - 18350.080: 9.4059% ( 165) 00:11:11.540 18350.080 - 18469.236: 12.2370% ( 183) 00:11:11.540 18469.236 - 18588.393: 16.5223% ( 277) 00:11:11.540 18588.393 - 18707.549: 18.8428% ( 150) 00:11:11.540 18707.549 - 18826.705: 21.7048% ( 185) 00:11:11.540 18826.705 - 18945.862: 23.8397% ( 138) 00:11:11.540 18945.862 - 19065.018: 26.3459% ( 162) 00:11:11.540 19065.018 - 19184.175: 30.3837% ( 261) 00:11:11.540 19184.175 - 19303.331: 34.5606% ( 270) 00:11:11.540 19303.331 - 19422.487: 39.9598% ( 349) 00:11:11.540 19422.487 - 19541.644: 45.0959% ( 332) 00:11:11.540 19541.644 - 19660.800: 49.0718% ( 257) 00:11:11.540 19660.800 - 19779.956: 58.3849% ( 602) 00:11:11.540 19779.956 - 19899.113: 64.0316% ( 365) 00:11:11.540 19899.113 - 20018.269: 70.0340% ( 388) 00:11:11.540 20018.269 - 20137.425: 75.3558% ( 344) 00:11:11.540 20137.425 - 20256.582: 79.9350% ( 296) 00:11:11.540 20256.582 - 20375.738: 83.6170% ( 238) 00:11:11.540 20375.738 - 20494.895: 86.5563% ( 190) 00:11:11.540 20494.895 - 20614.051: 88.3973% ( 119) 00:11:11.540 20614.051 - 20733.207: 90.2228% ( 118) 00:11:11.540 20733.207 - 20852.364: 92.0019% ( 115) 00:11:11.540 20852.364 - 20971.520: 93.3478% ( 87) 00:11:11.540 20971.520 - 21090.676: 94.4616% ( 72) 00:11:11.540 21090.676 - 21209.833: 95.4208% ( 62) 00:11:11.540 21209.833 - 21328.989: 96.1634% ( 48) 00:11:11.540 21328.989 - 21448.145: 97.2463% ( 70) 00:11:11.540 21448.145 - 21567.302: 97.4474% ( 13) 00:11:11.540 21567.302 - 21686.458: 97.6021% ( 10) 00:11:11.540 21686.458 - 21805.615: 97.7104% ( 7) 00:11:11.540 21805.615 - 21924.771: 97.7877% ( 5) 00:11:11.540 21924.771 - 22043.927: 97.8806% ( 6) 00:11:11.540 22043.927 - 22163.084: 97.9425% ( 4) 00:11:11.540 22163.084 - 22282.240: 98.0043% ( 4) 00:11:11.540 22282.240 - 22401.396: 98.0198% ( 1) 00:11:11.540 28120.902 - 28240.058: 98.0507% ( 2) 00:11:11.540 28240.058 - 28359.215: 98.1281% ( 5) 00:11:11.540 28359.215 - 28478.371: 98.1900% ( 4) 00:11:11.540 28478.371 - 28597.527: 98.2209% ( 2) 00:11:11.540 28597.527 - 28716.684: 98.2519% ( 2) 00:11:11.540 28716.684 - 28835.840: 98.2828% ( 2) 00:11:11.540 28835.840 - 28954.996: 98.3137% ( 2) 00:11:11.540 28954.996 - 29074.153: 98.3447% ( 2) 00:11:11.540 29074.153 - 29193.309: 98.3756% ( 2) 00:11:11.540 29193.309 - 29312.465: 98.4066% ( 2) 00:11:11.540 29312.465 - 29431.622: 98.4375% ( 2) 00:11:11.540 29431.622 - 29550.778: 98.4684% ( 2) 00:11:11.540 29550.778 - 29669.935: 98.4994% ( 2) 00:11:11.540 29669.935 - 29789.091: 98.5303% ( 2) 
00:11:11.540 29789.091 - 29908.247: 98.5767% ( 3) 00:11:11.540 29908.247 - 30027.404: 98.6077% ( 2) 00:11:11.540 30027.404 - 30146.560: 98.6386% ( 2) 00:11:11.540 30146.560 - 30265.716: 98.6850% ( 3) 00:11:11.540 30265.716 - 30384.873: 98.7160% ( 2) 00:11:11.540 30384.873 - 30504.029: 98.7469% ( 2) 00:11:11.540 30504.029 - 30742.342: 98.8243% ( 5) 00:11:11.540 30742.342 - 30980.655: 98.8861% ( 4) 00:11:11.540 30980.655 - 31218.967: 98.9480% ( 4) 00:11:11.540 31218.967 - 31457.280: 99.0099% ( 4) 00:11:11.540 39559.913 - 39798.225: 99.0408% ( 2) 00:11:11.540 39798.225 - 40036.538: 99.1027% ( 4) 00:11:11.540 40036.538 - 40274.851: 99.1801% ( 5) 00:11:11.540 40274.851 - 40513.164: 99.2420% ( 4) 00:11:11.540 40513.164 - 40751.476: 99.3038% ( 4) 00:11:11.540 40751.476 - 40989.789: 99.3812% ( 5) 00:11:11.540 40989.789 - 41228.102: 99.4585% ( 5) 00:11:11.540 41228.102 - 41466.415: 99.5204% ( 4) 00:11:11.540 41466.415 - 41704.727: 99.5978% ( 5) 00:11:11.540 41704.727 - 41943.040: 99.6751% ( 5) 00:11:11.540 41943.040 - 42181.353: 99.7370% ( 4) 00:11:11.540 42181.353 - 42419.665: 99.7989% ( 4) 00:11:11.540 42419.665 - 42657.978: 99.8762% ( 5) 00:11:11.540 42657.978 - 42896.291: 99.9381% ( 4) 00:11:11.540 42896.291 - 43134.604: 100.0000% ( 4) 00:11:11.540 00:11:11.540 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:11.540 ============================================================================== 00:11:11.540 Range in us Cumulative IO count 00:11:11.540 17039.360 - 17158.516: 0.0309% ( 2) 00:11:11.540 17158.516 - 17277.673: 0.0464% ( 1) 00:11:11.540 17277.673 - 17396.829: 0.1238% ( 5) 00:11:11.540 17396.829 - 17515.985: 0.3094% ( 12) 00:11:11.540 17515.985 - 17635.142: 0.5569% ( 16) 00:11:11.541 17635.142 - 17754.298: 1.1603% ( 39) 00:11:11.541 17754.298 - 17873.455: 1.9028% ( 48) 00:11:11.541 17873.455 - 17992.611: 2.9548% ( 68) 00:11:11.541 17992.611 - 18111.767: 4.7803% ( 118) 00:11:11.541 18111.767 - 18230.924: 6.7141% ( 125) 00:11:11.541 18230.924 - 18350.080: 9.2822% ( 166) 00:11:11.541 18350.080 - 18469.236: 11.7110% ( 157) 00:11:11.541 18469.236 - 18588.393: 15.7952% ( 264) 00:11:11.541 18588.393 - 18707.549: 18.3942% ( 168) 00:11:11.541 18707.549 - 18826.705: 21.2562% ( 185) 00:11:11.541 18826.705 - 18945.862: 23.2364% ( 128) 00:11:11.541 18945.862 - 19065.018: 25.6498% ( 156) 00:11:11.541 19065.018 - 19184.175: 29.7803% ( 267) 00:11:11.541 19184.175 - 19303.331: 33.5396% ( 243) 00:11:11.541 19303.331 - 19422.487: 39.5421% ( 388) 00:11:11.541 19422.487 - 19541.644: 44.5854% ( 326) 00:11:11.541 19541.644 - 19660.800: 49.7679% ( 335) 00:11:11.541 19660.800 - 19779.956: 57.9363% ( 528) 00:11:11.541 19779.956 - 19899.113: 64.9598% ( 454) 00:11:11.541 19899.113 - 20018.269: 70.5910% ( 364) 00:11:11.541 20018.269 - 20137.425: 75.9282% ( 345) 00:11:11.541 20137.425 - 20256.582: 80.0433% ( 266) 00:11:11.541 20256.582 - 20375.738: 83.7871% ( 242) 00:11:11.541 20375.738 - 20494.895: 86.6182% ( 183) 00:11:11.541 20494.895 - 20614.051: 88.5210% ( 123) 00:11:11.541 20614.051 - 20733.207: 90.3465% ( 118) 00:11:11.541 20733.207 - 20852.364: 92.0792% ( 112) 00:11:11.541 20852.364 - 20971.520: 93.2859% ( 78) 00:11:11.541 20971.520 - 21090.676: 94.3843% ( 71) 00:11:11.541 21090.676 - 21209.833: 95.3280% ( 61) 00:11:11.541 21209.833 - 21328.989: 96.0087% ( 44) 00:11:11.541 21328.989 - 21448.145: 97.2308% ( 79) 00:11:11.541 21448.145 - 21567.302: 97.4783% ( 16) 00:11:11.541 21567.302 - 21686.458: 97.6176% ( 9) 00:11:11.541 21686.458 - 21805.615: 97.7104% ( 6) 00:11:11.541 21805.615 - 
21924.771: 97.8187% ( 7) 00:11:11.541 21924.771 - 22043.927: 97.8806% ( 4) 00:11:11.541 22043.927 - 22163.084: 97.9734% ( 6) 00:11:11.541 22163.084 - 22282.240: 98.0198% ( 3) 00:11:11.541 26095.244 - 26214.400: 98.0507% ( 2) 00:11:11.541 26214.400 - 26333.556: 98.1126% ( 4) 00:11:11.541 26333.556 - 26452.713: 98.1590% ( 3) 00:11:11.541 26452.713 - 26571.869: 98.1900% ( 2) 00:11:11.541 26571.869 - 26691.025: 98.2828% ( 6) 00:11:11.541 26691.025 - 26810.182: 98.3447% ( 4) 00:11:11.541 26810.182 - 26929.338: 98.4066% ( 4) 00:11:11.541 26929.338 - 27048.495: 98.4684% ( 4) 00:11:11.541 27048.495 - 27167.651: 98.5149% ( 3) 00:11:11.541 27167.651 - 27286.807: 98.5458% ( 2) 00:11:11.541 27286.807 - 27405.964: 98.5767% ( 2) 00:11:11.541 27405.964 - 27525.120: 98.6077% ( 2) 00:11:11.541 27525.120 - 27644.276: 98.6386% ( 2) 00:11:11.541 27644.276 - 27763.433: 98.6696% ( 2) 00:11:11.541 27763.433 - 27882.589: 98.7005% ( 2) 00:11:11.541 27882.589 - 28001.745: 98.7314% ( 2) 00:11:11.541 28001.745 - 28120.902: 98.7624% ( 2) 00:11:11.541 28120.902 - 28240.058: 98.7933% ( 2) 00:11:11.541 28240.058 - 28359.215: 98.8243% ( 2) 00:11:11.541 28359.215 - 28478.371: 98.8552% ( 2) 00:11:11.541 28478.371 - 28597.527: 98.8861% ( 2) 00:11:11.541 28597.527 - 28716.684: 98.9171% ( 2) 00:11:11.541 28716.684 - 28835.840: 98.9325% ( 1) 00:11:11.541 28835.840 - 28954.996: 98.9480% ( 1) 00:11:11.541 28954.996 - 29074.153: 98.9944% ( 3) 00:11:11.541 29193.309 - 29312.465: 99.0099% ( 1) 00:11:11.541 35031.971 - 35270.284: 99.0254% ( 1) 00:11:11.541 35270.284 - 35508.596: 99.1027% ( 5) 00:11:11.541 35508.596 - 35746.909: 99.1491% ( 3) 00:11:11.541 37176.785 - 37415.098: 99.1646% ( 1) 00:11:11.541 37415.098 - 37653.411: 99.2265% ( 4) 00:11:11.541 37653.411 - 37891.724: 99.2729% ( 3) 00:11:11.541 37891.724 - 38130.036: 99.3348% ( 4) 00:11:11.541 38130.036 - 38368.349: 99.3967% ( 4) 00:11:11.541 38368.349 - 38606.662: 99.4585% ( 4) 00:11:11.541 38606.662 - 38844.975: 99.5204% ( 4) 00:11:11.541 38844.975 - 39083.287: 99.5823% ( 4) 00:11:11.541 39083.287 - 39321.600: 99.6442% ( 4) 00:11:11.541 39321.600 - 39559.913: 99.7061% ( 4) 00:11:11.541 39559.913 - 39798.225: 99.7679% ( 4) 00:11:11.541 39798.225 - 40036.538: 99.8453% ( 5) 00:11:11.541 40036.538 - 40274.851: 99.9072% ( 4) 00:11:11.541 40274.851 - 40513.164: 99.9845% ( 5) 00:11:11.541 40513.164 - 40751.476: 100.0000% ( 1) 00:11:11.541 00:11:11.541 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:11.541 ============================================================================== 00:11:11.541 Range in us Cumulative IO count 00:11:11.541 17039.360 - 17158.516: 0.0155% ( 1) 00:11:11.541 17158.516 - 17277.673: 0.0464% ( 2) 00:11:11.541 17277.673 - 17396.829: 0.2475% ( 13) 00:11:11.541 17396.829 - 17515.985: 0.4486% ( 13) 00:11:11.541 17515.985 - 17635.142: 0.6498% ( 13) 00:11:11.541 17635.142 - 17754.298: 1.1293% ( 31) 00:11:11.541 17754.298 - 17873.455: 1.7791% ( 42) 00:11:11.541 17873.455 - 17992.611: 2.8465% ( 69) 00:11:11.541 17992.611 - 18111.767: 4.6256% ( 115) 00:11:11.541 18111.767 - 18230.924: 6.1417% ( 98) 00:11:11.541 18230.924 - 18350.080: 9.4059% ( 211) 00:11:11.541 18350.080 - 18469.236: 11.7574% ( 152) 00:11:11.541 18469.236 - 18588.393: 16.7234% ( 321) 00:11:11.541 18588.393 - 18707.549: 18.7191% ( 129) 00:11:11.541 18707.549 - 18826.705: 21.3335% ( 169) 00:11:11.541 18826.705 - 18945.862: 23.2828% ( 126) 00:11:11.541 18945.862 - 19065.018: 25.9437% ( 172) 00:11:11.541 19065.018 - 19184.175: 28.9759% ( 196) 00:11:11.541 19184.175 - 19303.331: 
32.7042% ( 241) 00:11:11.541 19303.331 - 19422.487: 38.0415% ( 345) 00:11:11.541 19422.487 - 19541.644: 44.8639% ( 441) 00:11:11.541 19541.644 - 19660.800: 50.7426% ( 380) 00:11:11.541 19660.800 - 19779.956: 56.6832% ( 384) 00:11:11.541 19779.956 - 19899.113: 65.5322% ( 572) 00:11:11.541 19899.113 - 20018.269: 71.1634% ( 364) 00:11:11.541 20018.269 - 20137.425: 76.5161% ( 346) 00:11:11.541 20137.425 - 20256.582: 80.6467% ( 267) 00:11:11.541 20256.582 - 20375.738: 83.9573% ( 214) 00:11:11.541 20375.738 - 20494.895: 86.2933% ( 151) 00:11:11.541 20494.895 - 20614.051: 88.2116% ( 124) 00:11:11.541 20614.051 - 20733.207: 90.0371% ( 118) 00:11:11.541 20733.207 - 20852.364: 91.5996% ( 101) 00:11:11.541 20852.364 - 20971.520: 92.9146% ( 85) 00:11:11.541 20971.520 - 21090.676: 93.9975% ( 70) 00:11:11.541 21090.676 - 21209.833: 95.2506% ( 81) 00:11:11.541 21209.833 - 21328.989: 95.8849% ( 41) 00:11:11.541 21328.989 - 21448.145: 96.8131% ( 60) 00:11:11.541 21448.145 - 21567.302: 97.3700% ( 36) 00:11:11.541 21567.302 - 21686.458: 97.5866% ( 14) 00:11:11.541 21686.458 - 21805.615: 97.6949% ( 7) 00:11:11.541 21805.615 - 21924.771: 97.8032% ( 7) 00:11:11.541 21924.771 - 22043.927: 97.8651% ( 4) 00:11:11.541 22043.927 - 22163.084: 97.9270% ( 4) 00:11:11.541 22163.084 - 22282.240: 97.9889% ( 4) 00:11:11.541 22282.240 - 22401.396: 98.0198% ( 2) 00:11:11.541 23354.647 - 23473.804: 98.0507% ( 2) 00:11:11.541 23473.804 - 23592.960: 98.1126% ( 4) 00:11:11.541 23592.960 - 23712.116: 98.1900% ( 5) 00:11:11.541 23712.116 - 23831.273: 98.2519% ( 4) 00:11:11.541 23831.273 - 23950.429: 98.3137% ( 4) 00:11:11.541 23950.429 - 24069.585: 98.3756% ( 4) 00:11:11.541 24069.585 - 24188.742: 98.4220% ( 3) 00:11:11.541 24188.742 - 24307.898: 98.4839% ( 4) 00:11:11.541 24307.898 - 24427.055: 98.5303% ( 3) 00:11:11.541 24427.055 - 24546.211: 98.5613% ( 2) 00:11:11.541 24546.211 - 24665.367: 98.5767% ( 1) 00:11:11.541 24665.367 - 24784.524: 98.6077% ( 2) 00:11:11.541 24784.524 - 24903.680: 98.6386% ( 2) 00:11:11.541 24903.680 - 25022.836: 98.6696% ( 2) 00:11:11.541 25022.836 - 25141.993: 98.7005% ( 2) 00:11:11.541 25141.993 - 25261.149: 98.7314% ( 2) 00:11:11.541 25261.149 - 25380.305: 98.7624% ( 2) 00:11:11.541 25380.305 - 25499.462: 98.7933% ( 2) 00:11:11.541 25499.462 - 25618.618: 98.8243% ( 2) 00:11:11.541 25618.618 - 25737.775: 98.8552% ( 2) 00:11:11.541 25737.775 - 25856.931: 98.8861% ( 2) 00:11:11.541 25856.931 - 25976.087: 98.9171% ( 2) 00:11:11.541 25976.087 - 26095.244: 98.9325% ( 1) 00:11:11.541 26095.244 - 26214.400: 98.9635% ( 2) 00:11:11.541 26214.400 - 26333.556: 98.9944% ( 2) 00:11:11.541 26333.556 - 26452.713: 99.0099% ( 1) 00:11:11.541 32410.531 - 32648.844: 99.0408% ( 2) 00:11:11.541 34317.033 - 34555.345: 99.0563% ( 1) 00:11:11.541 34555.345 - 34793.658: 99.1027% ( 3) 00:11:11.541 34793.658 - 35031.971: 99.1646% ( 4) 00:11:11.541 35031.971 - 35270.284: 99.2420% ( 5) 00:11:11.541 35270.284 - 35508.596: 99.3038% ( 4) 00:11:11.541 35508.596 - 35746.909: 99.3812% ( 5) 00:11:11.541 35746.909 - 35985.222: 99.4431% ( 4) 00:11:11.541 35985.222 - 36223.535: 99.5204% ( 5) 00:11:11.541 36223.535 - 36461.847: 99.5823% ( 4) 00:11:11.542 36461.847 - 36700.160: 99.6597% ( 5) 00:11:11.542 36700.160 - 36938.473: 99.7215% ( 4) 00:11:11.542 36938.473 - 37176.785: 99.7834% ( 4) 00:11:11.542 37176.785 - 37415.098: 99.8608% ( 5) 00:11:11.542 37415.098 - 37653.411: 99.9226% ( 4) 00:11:11.542 37653.411 - 37891.724: 99.9845% ( 4) 00:11:11.542 37891.724 - 38130.036: 100.0000% ( 1) 00:11:11.542 00:11:11.542 18:03:03 nvme.nvme_perf -- 
nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:11.542 00:11:11.542 real 0m2.771s 00:11:11.542 user 0m2.359s 00:11:11.542 sys 0m0.312s 00:11:11.542 18:03:03 nvme.nvme_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:11.542 18:03:03 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:11:11.542 ************************************ 00:11:11.542 END TEST nvme_perf 00:11:11.542 ************************************ 00:11:11.542 18:03:03 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:11.542 18:03:03 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:11.542 18:03:03 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:11.542 18:03:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:11.542 ************************************ 00:11:11.542 START TEST nvme_hello_world 00:11:11.542 ************************************ 00:11:11.542 18:03:03 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:11.801 Initializing NVMe Controllers 00:11:11.801 Attached to 0000:00:10.0 00:11:11.801 Namespace ID: 1 size: 6GB 00:11:11.801 Attached to 0000:00:11.0 00:11:11.801 Namespace ID: 1 size: 5GB 00:11:11.801 Attached to 0000:00:13.0 00:11:11.801 Namespace ID: 1 size: 1GB 00:11:11.801 Attached to 0000:00:12.0 00:11:11.801 Namespace ID: 1 size: 4GB 00:11:11.801 Namespace ID: 2 size: 4GB 00:11:11.801 Namespace ID: 3 size: 4GB 00:11:11.801 Initialization complete. 00:11:11.801 INFO: using host memory buffer for IO 00:11:11.801 Hello world! 00:11:11.801 INFO: using host memory buffer for IO 00:11:11.801 Hello world! 00:11:11.801 INFO: using host memory buffer for IO 00:11:11.801 Hello world! 00:11:11.801 INFO: using host memory buffer for IO 00:11:11.801 Hello world! 00:11:11.801 INFO: using host memory buffer for IO 00:11:11.801 Hello world! 00:11:11.801 INFO: using host memory buffer for IO 00:11:11.801 Hello world! 
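For reference, the nvme_hello_world pass above appears to issue one write/read round-trip per attached namespace and print "Hello world!" for each. It can be reproduced outside the harness with a minimal sketch like the following (the binary path and the -i 0 shared-memory id are taken from this log; running scripts/setup.sh first is an assumption about what the harness has already done on this host):

    cd /home/vagrant/spdk_repo/spdk
    sudo scripts/setup.sh                  # bind the NVMe controllers to a userspace driver
    sudo build/examples/hello_world -i 0   # -i picks the shared-memory instance id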
00:11:11.801 00:11:11.801 real 0m0.311s 00:11:11.801 user 0m0.118s 00:11:11.801 sys 0m0.143s 00:11:11.801 18:03:04 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:11.801 18:03:04 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:11.801 ************************************ 00:11:11.801 END TEST nvme_hello_world 00:11:11.801 ************************************ 00:11:11.801 18:03:04 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:11.801 18:03:04 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:11.801 18:03:04 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:11.801 18:03:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:11.801 ************************************ 00:11:11.801 START TEST nvme_sgl 00:11:11.801 ************************************ 00:11:11.801 18:03:04 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:12.060 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:11:12.060 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:11:12.060 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:11:12.060 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:11:12.060 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:11:12.060 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:11:12.060 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:11:12.060 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:11:12.060 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:11:12.321 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:11:12.321 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:11:12.321 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:11:12.321 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:11:12.321 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:11:12.321 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:11:12.321 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:11:12.321 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:11:12.321 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:11:12.321 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:11:12.321 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:11:12.321 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:11:12.321 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:11:12.321 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:11:12.321 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:11:12.321 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:11:12.321 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:11:12.321 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:11:12.321 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:11:12.321 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:11:12.321 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:11:12.321 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:11:12.321 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:11:12.321 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:11:12.321 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:11:12.321 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:11:12.321 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:11:12.321 NVMe Readv/Writev Request test 00:11:12.321 Attached to 0000:00:10.0 00:11:12.321 Attached to 0000:00:11.0 00:11:12.321 Attached to 0000:00:13.0 00:11:12.321 Attached to 0000:00:12.0 00:11:12.321 0000:00:10.0: build_io_request_2 test passed 00:11:12.321 0000:00:10.0: build_io_request_4 test passed 00:11:12.321 0000:00:10.0: build_io_request_5 test passed 00:11:12.321 0000:00:10.0: build_io_request_6 test passed 00:11:12.321 0000:00:10.0: build_io_request_7 test passed 00:11:12.321 0000:00:10.0: build_io_request_10 test passed 00:11:12.321 0000:00:11.0: build_io_request_2 test passed 00:11:12.321 0000:00:11.0: build_io_request_4 test passed 00:11:12.321 0000:00:11.0: build_io_request_5 test passed 00:11:12.321 0000:00:11.0: build_io_request_6 test passed 00:11:12.321 0000:00:11.0: build_io_request_7 test passed 00:11:12.321 0000:00:11.0: build_io_request_10 test passed 00:11:12.321 Cleaning up... 00:11:12.321 00:11:12.321 real 0m0.355s 00:11:12.321 user 0m0.169s 00:11:12.321 sys 0m0.144s 00:11:12.321 18:03:04 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:12.321 18:03:04 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:11:12.321 ************************************ 00:11:12.321 END TEST nvme_sgl 00:11:12.321 ************************************ 00:11:12.321 18:03:04 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:12.321 18:03:04 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:12.321 18:03:04 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:12.321 18:03:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:12.321 ************************************ 00:11:12.321 START TEST nvme_e2edp 00:11:12.321 ************************************ 00:11:12.321 18:03:04 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:12.581 NVMe Write/Read with End-to-End data protection test 00:11:12.581 Attached to 0000:00:10.0 00:11:12.581 Attached to 0000:00:11.0 00:11:12.581 Attached to 0000:00:13.0 00:11:12.581 Attached to 0000:00:12.0 00:11:12.581 Cleaning up... 
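Every stage in this log is wrapped by the run_test helper from common/autotest_common.sh, which produces the asterisk banners and the real/user/sys timing lines seen throughout. The helper itself lives in the repo; its observable behavior is roughly this sketch (an approximation reconstructed from the log output, not the verbatim source):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"   # the wrapped binary or shell function; bash's time emits real/user/sys
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }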
00:11:12.581 00:11:12.581 real 0m0.288s 00:11:12.581 user 0m0.098s 00:11:12.581 sys 0m0.146s 00:11:12.581 18:03:04 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:12.581 18:03:04 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:11:12.581 ************************************ 00:11:12.581 END TEST nvme_e2edp 00:11:12.581 ************************************ 00:11:12.581 18:03:05 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:12.581 18:03:05 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:12.581 18:03:05 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:12.581 18:03:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:12.581 ************************************ 00:11:12.581 START TEST nvme_reserve 00:11:12.581 ************************************ 00:11:12.581 18:03:05 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:12.840 ===================================================== 00:11:12.840 NVMe Controller at PCI bus 0, device 16, function 0 00:11:12.840 ===================================================== 00:11:12.840 Reservations: Not Supported 00:11:12.840 ===================================================== 00:11:12.840 NVMe Controller at PCI bus 0, device 17, function 0 00:11:12.840 ===================================================== 00:11:12.840 Reservations: Not Supported 00:11:12.840 ===================================================== 00:11:12.840 NVMe Controller at PCI bus 0, device 19, function 0 00:11:12.840 ===================================================== 00:11:12.840 Reservations: Not Supported 00:11:12.840 ===================================================== 00:11:12.840 NVMe Controller at PCI bus 0, device 18, function 0 00:11:12.840 ===================================================== 00:11:12.840 Reservations: Not Supported 00:11:12.840 Reservation test passed 00:11:12.840 00:11:12.840 real 0m0.304s 00:11:12.840 user 0m0.087s 00:11:12.840 sys 0m0.157s 00:11:12.840 18:03:05 nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:12.840 18:03:05 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:11:12.840 ************************************ 00:11:12.840 END TEST nvme_reserve 00:11:12.840 ************************************ 00:11:13.098 18:03:05 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:13.098 18:03:05 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:13.098 18:03:05 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:13.098 18:03:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:13.098 ************************************ 00:11:13.098 START TEST nvme_err_injection 00:11:13.098 ************************************ 00:11:13.098 18:03:05 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:13.357 NVMe Error Injection test 00:11:13.357 Attached to 0000:00:10.0 00:11:13.357 Attached to 0000:00:11.0 00:11:13.357 Attached to 0000:00:13.0 00:11:13.357 Attached to 0000:00:12.0 00:11:13.357 0000:00:10.0: get features failed as expected 00:11:13.357 0000:00:11.0: get features failed as expected 00:11:13.357 0000:00:13.0: get features failed as expected 00:11:13.357 0000:00:12.0: get features failed as expected 00:11:13.357 
0000:00:12.0: get features successfully as expected 00:11:13.357 0000:00:10.0: get features successfully as expected 00:11:13.357 0000:00:11.0: get features successfully as expected 00:11:13.357 0000:00:13.0: get features successfully as expected 00:11:13.357 0000:00:10.0: read failed as expected 00:11:13.357 0000:00:11.0: read failed as expected 00:11:13.357 0000:00:13.0: read failed as expected 00:11:13.357 0000:00:12.0: read failed as expected 00:11:13.357 0000:00:10.0: read successfully as expected 00:11:13.357 0000:00:11.0: read successfully as expected 00:11:13.357 0000:00:13.0: read successfully as expected 00:11:13.357 0000:00:12.0: read successfully as expected 00:11:13.357 Cleaning up... 00:11:13.357 00:11:13.357 real 0m0.319s 00:11:13.357 user 0m0.124s 00:11:13.357 sys 0m0.153s 00:11:13.357 18:03:05 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:13.357 18:03:05 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:11:13.357 ************************************ 00:11:13.357 END TEST nvme_err_injection 00:11:13.357 ************************************ 00:11:13.357 18:03:05 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:13.357 18:03:05 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:11:13.357 18:03:05 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:13.357 18:03:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:13.357 ************************************ 00:11:13.357 START TEST nvme_overhead 00:11:13.357 ************************************ 00:11:13.357 18:03:05 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:14.734 Initializing NVMe Controllers 00:11:14.734 Attached to 0000:00:10.0 00:11:14.734 Attached to 0000:00:11.0 00:11:14.734 Attached to 0000:00:13.0 00:11:14.734 Attached to 0000:00:12.0 00:11:14.734 Initialization complete. Launching workers. 
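The Submit/Complete histograms that follow are produced by the overhead tool launched just above. Only the command line itself appears in the log; the flag readings below follow the usual SPDK conventions and should be treated as assumptions:

    # -o 4096  I/O size in bytes
    # -t 1     run time in seconds
    # -H       print per-operation submit/complete latency histograms
    # -i 0     shared-memory instance id, shared with other SPDK processes
    /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0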
00:11:14.734 submit (in ns) avg, min, max = 16110.5, 12628.2, 111912.7 00:11:14.734 complete (in ns) avg, min, max = 10764.4, 8527.3, 77894.5 00:11:14.734 00:11:14.734 Submit histogram 00:11:14.734 ================ 00:11:14.734 Range in us Cumulative Count 00:11:14.734 12.625 - 12.684: 0.0345% ( 3) 00:11:14.734 12.684 - 12.742: 0.1034% ( 6) 00:11:14.734 12.742 - 12.800: 0.1264% ( 2) 00:11:14.734 12.800 - 12.858: 0.1608% ( 3) 00:11:14.734 12.858 - 12.916: 0.1953% ( 3) 00:11:14.734 12.916 - 12.975: 0.2642% ( 6) 00:11:14.734 12.975 - 13.033: 0.3562% ( 8) 00:11:14.734 13.033 - 13.091: 0.5170% ( 14) 00:11:14.734 13.091 - 13.149: 0.6664% ( 13) 00:11:14.734 13.149 - 13.207: 0.8157% ( 13) 00:11:14.734 13.207 - 13.265: 1.0110% ( 17) 00:11:14.734 13.265 - 13.324: 1.3327% ( 28) 00:11:14.734 13.324 - 13.382: 1.5625% ( 20) 00:11:14.734 13.382 - 13.440: 1.7348% ( 15) 00:11:14.734 13.440 - 13.498: 2.0680% ( 29) 00:11:14.734 13.498 - 13.556: 2.3323% ( 23) 00:11:14.734 13.556 - 13.615: 2.6999% ( 32) 00:11:14.734 13.615 - 13.673: 3.2054% ( 44) 00:11:14.734 13.673 - 13.731: 3.8028% ( 52) 00:11:14.734 13.731 - 13.789: 4.4118% ( 53) 00:11:14.734 13.789 - 13.847: 4.8598% ( 39) 00:11:14.734 13.847 - 13.905: 5.4573% ( 52) 00:11:14.734 13.905 - 13.964: 5.9628% ( 44) 00:11:14.734 13.964 - 14.022: 6.4108% ( 39) 00:11:14.734 14.022 - 14.080: 6.9853% ( 50) 00:11:14.734 14.080 - 14.138: 7.6402% ( 57) 00:11:14.734 14.138 - 14.196: 8.3410% ( 61) 00:11:14.734 14.196 - 14.255: 9.4899% ( 100) 00:11:14.734 14.255 - 14.313: 10.4894% ( 87) 00:11:14.734 14.313 - 14.371: 12.2472% ( 153) 00:11:14.734 14.371 - 14.429: 13.7293% ( 129) 00:11:14.734 14.429 - 14.487: 16.1994% ( 215) 00:11:14.734 14.487 - 14.545: 20.0253% ( 333) 00:11:14.734 14.545 - 14.604: 25.3906% ( 467) 00:11:14.734 14.604 - 14.662: 33.0423% ( 666) 00:11:14.734 14.662 - 14.720: 40.6939% ( 666) 00:11:14.734 14.720 - 14.778: 48.0124% ( 637) 00:11:14.734 14.778 - 14.836: 53.5156% ( 479) 00:11:14.734 14.836 - 14.895: 57.8125% ( 374) 00:11:14.734 14.895 - 15.011: 63.5800% ( 502) 00:11:14.734 15.011 - 15.127: 67.1415% ( 310) 00:11:14.734 15.127 - 15.244: 69.5427% ( 209) 00:11:14.734 15.244 - 15.360: 70.9674% ( 124) 00:11:14.734 15.360 - 15.476: 72.1048% ( 99) 00:11:14.734 15.476 - 15.593: 72.8745% ( 67) 00:11:14.734 15.593 - 15.709: 73.4260% ( 48) 00:11:14.734 15.709 - 15.825: 73.7477% ( 28) 00:11:14.734 15.825 - 15.942: 74.1383% ( 34) 00:11:14.734 15.942 - 16.058: 74.3796% ( 21) 00:11:14.734 16.058 - 16.175: 74.5634% ( 16) 00:11:14.734 16.175 - 16.291: 74.7472% ( 16) 00:11:14.734 16.291 - 16.407: 75.0460% ( 26) 00:11:14.734 16.407 - 16.524: 75.9076% ( 75) 00:11:14.734 16.524 - 16.640: 77.4931% ( 138) 00:11:14.734 16.640 - 16.756: 79.8828% ( 208) 00:11:14.734 16.756 - 16.873: 82.0083% ( 185) 00:11:14.734 16.873 - 16.989: 83.5363% ( 133) 00:11:14.734 16.989 - 17.105: 84.5818% ( 91) 00:11:14.734 17.105 - 17.222: 85.2252% ( 56) 00:11:14.734 17.222 - 17.338: 85.8226% ( 52) 00:11:14.734 17.338 - 17.455: 86.1558% ( 29) 00:11:14.734 17.455 - 17.571: 86.3281% ( 15) 00:11:14.734 17.571 - 17.687: 86.6728% ( 30) 00:11:14.734 17.687 - 17.804: 87.1553% ( 42) 00:11:14.734 17.804 - 17.920: 87.6953% ( 47) 00:11:14.734 17.920 - 18.036: 88.3387% ( 56) 00:11:14.734 18.036 - 18.153: 88.7063% ( 32) 00:11:14.734 18.153 - 18.269: 89.0280% ( 28) 00:11:14.734 18.269 - 18.385: 89.2348% ( 18) 00:11:14.734 18.385 - 18.502: 89.4646% ( 20) 00:11:14.734 18.502 - 18.618: 89.6369% ( 15) 00:11:14.734 18.618 - 18.735: 89.7289% ( 8) 00:11:14.734 18.735 - 18.851: 89.7748% ( 4) 00:11:14.734 18.851 - 18.967: 
89.8667% ( 8) 00:11:14.734 18.967 - 19.084: 89.9701% ( 9) 00:11:14.734 19.084 - 19.200: 90.0506% ( 7) 00:11:14.734 19.200 - 19.316: 90.0965% ( 4) 00:11:14.734 19.316 - 19.433: 90.1769% ( 7) 00:11:14.734 19.433 - 19.549: 90.2459% ( 6) 00:11:14.734 19.549 - 19.665: 90.3608% ( 10) 00:11:14.734 19.665 - 19.782: 90.4986% ( 12) 00:11:14.734 19.782 - 19.898: 90.6480% ( 13) 00:11:14.734 19.898 - 20.015: 90.8318% ( 16) 00:11:14.734 20.015 - 20.131: 90.9352% ( 9) 00:11:14.734 20.131 - 20.247: 91.0960% ( 14) 00:11:14.734 20.247 - 20.364: 91.2684% ( 15) 00:11:14.734 20.364 - 20.480: 91.3603% ( 8) 00:11:14.734 20.480 - 20.596: 91.5441% ( 16) 00:11:14.734 20.596 - 20.713: 91.6705% ( 11) 00:11:14.734 20.713 - 20.829: 91.8313% ( 14) 00:11:14.734 20.829 - 20.945: 92.0381% ( 18) 00:11:14.734 20.945 - 21.062: 92.1760% ( 12) 00:11:14.734 21.062 - 21.178: 92.2679% ( 8) 00:11:14.734 21.178 - 21.295: 92.4747% ( 18) 00:11:14.734 21.295 - 21.411: 92.6471% ( 15) 00:11:14.734 21.411 - 21.527: 92.7505% ( 9) 00:11:14.734 21.527 - 21.644: 92.8998% ( 13) 00:11:14.734 21.644 - 21.760: 93.0377% ( 12) 00:11:14.734 21.760 - 21.876: 93.1985% ( 14) 00:11:14.734 21.876 - 21.993: 93.3249% ( 11) 00:11:14.734 21.993 - 22.109: 93.4398% ( 10) 00:11:14.734 22.109 - 22.225: 93.5547% ( 10) 00:11:14.734 22.225 - 22.342: 93.6351% ( 7) 00:11:14.734 22.342 - 22.458: 93.7270% ( 8) 00:11:14.734 22.458 - 22.575: 93.8419% ( 10) 00:11:14.734 22.575 - 22.691: 93.9223% ( 7) 00:11:14.734 22.691 - 22.807: 94.0142% ( 8) 00:11:14.734 22.807 - 22.924: 94.0832% ( 6) 00:11:14.734 22.924 - 23.040: 94.2670% ( 16) 00:11:14.734 23.040 - 23.156: 94.3819% ( 10) 00:11:14.734 23.156 - 23.273: 94.4623% ( 7) 00:11:14.734 23.273 - 23.389: 94.6002% ( 12) 00:11:14.734 23.389 - 23.505: 94.6921% ( 8) 00:11:14.734 23.505 - 23.622: 94.8759% ( 16) 00:11:14.734 23.622 - 23.738: 94.9678% ( 8) 00:11:14.734 23.738 - 23.855: 95.0827% ( 10) 00:11:14.734 23.855 - 23.971: 95.2321% ( 13) 00:11:14.734 23.971 - 24.087: 95.3585% ( 11) 00:11:14.734 24.087 - 24.204: 95.4274% ( 6) 00:11:14.734 24.204 - 24.320: 95.5193% ( 8) 00:11:14.734 24.320 - 24.436: 95.6112% ( 8) 00:11:14.734 24.436 - 24.553: 95.6572% ( 4) 00:11:14.734 24.553 - 24.669: 95.7261% ( 6) 00:11:14.734 24.669 - 24.785: 95.8180% ( 8) 00:11:14.734 24.785 - 24.902: 95.8984% ( 7) 00:11:14.734 24.902 - 25.018: 95.9214% ( 2) 00:11:14.734 25.018 - 25.135: 95.9559% ( 3) 00:11:14.734 25.135 - 25.251: 96.0018% ( 4) 00:11:14.734 25.251 - 25.367: 96.0708% ( 6) 00:11:14.734 25.367 - 25.484: 96.1167% ( 4) 00:11:14.734 25.484 - 25.600: 96.1972% ( 7) 00:11:14.734 25.600 - 25.716: 96.2776% ( 7) 00:11:14.734 25.716 - 25.833: 96.3235% ( 4) 00:11:14.734 25.833 - 25.949: 96.3695% ( 4) 00:11:14.734 25.949 - 26.065: 96.4154% ( 4) 00:11:14.734 26.065 - 26.182: 96.4959% ( 7) 00:11:14.734 26.182 - 26.298: 96.5418% ( 4) 00:11:14.734 26.298 - 26.415: 96.5648% ( 2) 00:11:14.734 26.415 - 26.531: 96.6222% ( 5) 00:11:14.734 26.531 - 26.647: 96.6452% ( 2) 00:11:14.734 26.647 - 26.764: 96.7027% ( 5) 00:11:14.734 26.764 - 26.880: 96.7371% ( 3) 00:11:14.734 26.880 - 26.996: 96.7831% ( 4) 00:11:14.734 26.996 - 27.113: 96.8750% ( 8) 00:11:14.734 27.113 - 27.229: 96.9095% ( 3) 00:11:14.734 27.229 - 27.345: 96.9210% ( 1) 00:11:14.734 27.462 - 27.578: 96.9669% ( 4) 00:11:14.734 27.578 - 27.695: 97.0014% ( 3) 00:11:14.734 27.695 - 27.811: 97.0129% ( 1) 00:11:14.734 27.811 - 27.927: 97.0818% ( 6) 00:11:14.734 27.927 - 28.044: 97.1622% ( 7) 00:11:14.734 28.044 - 28.160: 97.2082% ( 4) 00:11:14.734 28.160 - 28.276: 97.2312% ( 2) 00:11:14.734 28.276 - 28.393: 
97.2656% ( 3) 00:11:14.734 28.393 - 28.509: 97.3575% ( 8) 00:11:14.734 28.509 - 28.625: 97.4380% ( 7) 00:11:14.734 28.625 - 28.742: 97.4839% ( 4) 00:11:14.734 28.742 - 28.858: 97.5414% ( 5) 00:11:14.734 28.858 - 28.975: 97.6218% ( 7) 00:11:14.734 28.975 - 29.091: 97.7022% ( 7) 00:11:14.734 29.091 - 29.207: 97.8401% ( 12) 00:11:14.734 29.207 - 29.324: 97.9550% ( 10) 00:11:14.734 29.324 - 29.440: 98.0928% ( 12) 00:11:14.734 29.440 - 29.556: 98.2307% ( 12) 00:11:14.734 29.556 - 29.673: 98.3226% ( 8) 00:11:14.734 29.673 - 29.789: 98.5064% ( 16) 00:11:14.734 29.789 - 30.022: 98.8051% ( 26) 00:11:14.734 30.022 - 30.255: 99.1039% ( 26) 00:11:14.734 30.255 - 30.487: 99.2073% ( 9) 00:11:14.734 30.487 - 30.720: 99.2762% ( 6) 00:11:14.734 30.720 - 30.953: 99.3222% ( 4) 00:11:14.734 30.953 - 31.185: 99.3681% ( 4) 00:11:14.734 31.185 - 31.418: 99.3911% ( 2) 00:11:14.734 31.418 - 31.651: 99.4141% ( 2) 00:11:14.734 31.651 - 31.884: 99.4256% ( 1) 00:11:14.734 31.884 - 32.116: 99.4370% ( 1) 00:11:14.734 32.116 - 32.349: 99.4485% ( 1) 00:11:14.735 32.815 - 33.047: 99.4715% ( 2) 00:11:14.735 33.047 - 33.280: 99.4830% ( 1) 00:11:14.735 33.280 - 33.513: 99.5060% ( 2) 00:11:14.735 34.211 - 34.444: 99.5175% ( 1) 00:11:14.735 34.444 - 34.676: 99.5290% ( 1) 00:11:14.735 35.142 - 35.375: 99.5519% ( 2) 00:11:14.735 35.375 - 35.607: 99.5749% ( 2) 00:11:14.735 35.607 - 35.840: 99.5864% ( 1) 00:11:14.735 35.840 - 36.073: 99.6209% ( 3) 00:11:14.735 36.073 - 36.305: 99.6438% ( 2) 00:11:14.735 36.305 - 36.538: 99.6553% ( 1) 00:11:14.735 36.538 - 36.771: 99.6898% ( 3) 00:11:14.735 36.771 - 37.004: 99.7128% ( 2) 00:11:14.735 37.004 - 37.236: 99.7358% ( 2) 00:11:14.735 37.236 - 37.469: 99.7472% ( 1) 00:11:14.735 37.935 - 38.167: 99.7587% ( 1) 00:11:14.735 38.167 - 38.400: 99.7702% ( 1) 00:11:14.735 38.400 - 38.633: 99.7817% ( 1) 00:11:14.735 38.865 - 39.098: 99.8047% ( 2) 00:11:14.735 39.098 - 39.331: 99.8277% ( 2) 00:11:14.735 40.029 - 40.262: 99.8506% ( 2) 00:11:14.735 40.727 - 40.960: 99.8736% ( 2) 00:11:14.735 42.356 - 42.589: 99.8851% ( 1) 00:11:14.735 44.684 - 44.916: 99.8966% ( 1) 00:11:14.735 47.476 - 47.709: 99.9081% ( 1) 00:11:14.735 54.225 - 54.458: 99.9196% ( 1) 00:11:14.735 54.924 - 55.156: 99.9311% ( 1) 00:11:14.735 61.440 - 61.905: 99.9426% ( 1) 00:11:14.735 62.371 - 62.836: 99.9540% ( 1) 00:11:14.735 68.422 - 68.887: 99.9655% ( 1) 00:11:14.735 78.662 - 79.127: 99.9770% ( 1) 00:11:14.735 79.127 - 79.593: 99.9885% ( 1) 00:11:14.735 111.709 - 112.175: 100.0000% ( 1) 00:11:14.735 00:11:14.735 Complete histogram 00:11:14.735 ================== 00:11:14.735 Range in us Cumulative Count 00:11:14.735 8.495 - 8.553: 0.0230% ( 2) 00:11:14.735 8.553 - 8.611: 0.0689% ( 4) 00:11:14.735 8.611 - 8.669: 0.2872% ( 19) 00:11:14.735 8.669 - 8.727: 0.5630% ( 24) 00:11:14.735 8.727 - 8.785: 0.8961% ( 29) 00:11:14.735 8.785 - 8.844: 1.3097% ( 36) 00:11:14.735 8.844 - 8.902: 1.9991% ( 60) 00:11:14.735 8.902 - 8.960: 3.2399% ( 108) 00:11:14.735 8.960 - 9.018: 4.5841% ( 117) 00:11:14.735 9.018 - 9.076: 6.0432% ( 127) 00:11:14.735 9.076 - 9.135: 9.2142% ( 276) 00:11:14.735 9.135 - 9.193: 16.5211% ( 636) 00:11:14.735 9.193 - 9.251: 27.6080% ( 965) 00:11:14.735 9.251 - 9.309: 39.8323% ( 1064) 00:11:14.735 9.309 - 9.367: 48.6673% ( 769) 00:11:14.735 9.367 - 9.425: 54.0671% ( 470) 00:11:14.735 9.425 - 9.484: 57.2725% ( 279) 00:11:14.735 9.484 - 9.542: 59.7082% ( 212) 00:11:14.735 9.542 - 9.600: 62.2702% ( 223) 00:11:14.735 9.600 - 9.658: 64.3957% ( 185) 00:11:14.735 9.658 - 9.716: 65.7973% ( 122) 00:11:14.735 9.716 - 9.775: 66.7739% ( 85) 
00:11:14.735 9.775 - 9.833: 67.5896% ( 71) 00:11:14.735 9.833 - 9.891: 68.2560% ( 58) 00:11:14.735 9.891 - 9.949: 68.9798% ( 63) 00:11:14.735 9.949 - 10.007: 69.8989% ( 80) 00:11:14.735 10.007 - 10.065: 70.7146% ( 71) 00:11:14.735 10.065 - 10.124: 71.5993% ( 77) 00:11:14.735 10.124 - 10.182: 72.1852% ( 51) 00:11:14.735 10.182 - 10.240: 72.8745% ( 60) 00:11:14.735 10.240 - 10.298: 73.3571% ( 42) 00:11:14.735 10.298 - 10.356: 73.6558% ( 26) 00:11:14.735 10.356 - 10.415: 73.8971% ( 21) 00:11:14.735 10.415 - 10.473: 74.1153% ( 19) 00:11:14.735 10.473 - 10.531: 74.3222% ( 18) 00:11:14.735 10.531 - 10.589: 74.4485% ( 11) 00:11:14.735 10.589 - 10.647: 74.6553% ( 18) 00:11:14.735 10.647 - 10.705: 74.7932% ( 12) 00:11:14.735 10.705 - 10.764: 74.9885% ( 17) 00:11:14.735 10.764 - 10.822: 75.1264% ( 12) 00:11:14.735 10.822 - 10.880: 75.3102% ( 16) 00:11:14.735 10.880 - 10.938: 75.3791% ( 6) 00:11:14.735 10.938 - 10.996: 75.4596% ( 7) 00:11:14.735 10.996 - 11.055: 75.5400% ( 7) 00:11:14.735 11.055 - 11.113: 75.6089% ( 6) 00:11:14.735 11.113 - 11.171: 75.6549% ( 4) 00:11:14.735 11.171 - 11.229: 75.7468% ( 8) 00:11:14.735 11.229 - 11.287: 75.8272% ( 7) 00:11:14.735 11.287 - 11.345: 75.8847% ( 5) 00:11:14.735 11.345 - 11.404: 75.9995% ( 10) 00:11:14.735 11.404 - 11.462: 76.2293% ( 20) 00:11:14.735 11.462 - 11.520: 76.5625% ( 29) 00:11:14.735 11.520 - 11.578: 76.8957% ( 29) 00:11:14.735 11.578 - 11.636: 77.3552% ( 40) 00:11:14.735 11.636 - 11.695: 77.8378% ( 42) 00:11:14.735 11.695 - 11.753: 78.2973% ( 40) 00:11:14.735 11.753 - 11.811: 78.9177% ( 54) 00:11:14.735 11.811 - 11.869: 79.9403% ( 89) 00:11:14.735 11.869 - 11.927: 81.0432% ( 96) 00:11:14.735 11.927 - 11.985: 82.5483% ( 131) 00:11:14.735 11.985 - 12.044: 83.7891% ( 108) 00:11:14.735 12.044 - 12.102: 84.9150% ( 98) 00:11:14.735 12.102 - 12.160: 85.9260% ( 88) 00:11:14.735 12.160 - 12.218: 86.7532% ( 72) 00:11:14.735 12.218 - 12.276: 87.3277% ( 50) 00:11:14.735 12.276 - 12.335: 87.7298% ( 35) 00:11:14.735 12.335 - 12.393: 88.0859% ( 31) 00:11:14.735 12.393 - 12.451: 88.3961% ( 27) 00:11:14.735 12.451 - 12.509: 88.6604% ( 23) 00:11:14.735 12.509 - 12.567: 88.7753% ( 10) 00:11:14.735 12.567 - 12.625: 88.8902% ( 10) 00:11:14.735 12.625 - 12.684: 89.0395% ( 13) 00:11:14.735 12.684 - 12.742: 89.2004% ( 14) 00:11:14.735 12.742 - 12.800: 89.3842% ( 16) 00:11:14.735 12.800 - 12.858: 89.6369% ( 22) 00:11:14.735 12.858 - 12.916: 89.9701% ( 29) 00:11:14.735 12.916 - 12.975: 90.2574% ( 25) 00:11:14.735 12.975 - 13.033: 90.5331% ( 24) 00:11:14.735 13.033 - 13.091: 90.6824% ( 13) 00:11:14.735 13.091 - 13.149: 90.8663% ( 16) 00:11:14.735 13.149 - 13.207: 90.9926% ( 11) 00:11:14.735 13.207 - 13.265: 91.0960% ( 9) 00:11:14.735 13.265 - 13.324: 91.1650% ( 6) 00:11:14.735 13.324 - 13.382: 91.2914% ( 11) 00:11:14.735 13.382 - 13.440: 91.3373% ( 4) 00:11:14.735 13.440 - 13.498: 91.3948% ( 5) 00:11:14.735 13.498 - 13.556: 91.4982% ( 9) 00:11:14.735 13.556 - 13.615: 91.5786% ( 7) 00:11:14.735 13.615 - 13.673: 91.6820% ( 9) 00:11:14.735 13.673 - 13.731: 91.7165% ( 3) 00:11:14.735 13.731 - 13.789: 91.8313% ( 10) 00:11:14.735 13.789 - 13.847: 91.8543% ( 2) 00:11:14.735 13.847 - 13.905: 91.9233% ( 6) 00:11:14.735 13.905 - 13.964: 91.9807% ( 5) 00:11:14.735 13.964 - 14.022: 92.0267% ( 4) 00:11:14.735 14.022 - 14.080: 92.1301% ( 9) 00:11:14.735 14.080 - 14.138: 92.1645% ( 3) 00:11:14.735 14.255 - 14.313: 92.2220% ( 5) 00:11:14.735 14.313 - 14.371: 92.2564% ( 3) 00:11:14.735 14.371 - 14.429: 92.2679% ( 1) 00:11:14.735 14.429 - 14.487: 92.3024% ( 3) 00:11:14.735 14.487 - 14.545: 
92.3254% ( 2) 00:11:14.735 14.545 - 14.604: 92.3828% ( 5) 00:11:14.735 14.604 - 14.662: 92.4058% ( 2) 00:11:14.735 14.662 - 14.720: 92.4632% ( 5) 00:11:14.735 14.720 - 14.778: 92.5092% ( 4) 00:11:14.735 14.778 - 14.836: 92.5437% ( 3) 00:11:14.735 14.836 - 14.895: 92.5896% ( 4) 00:11:14.735 14.895 - 15.011: 92.6700% ( 7) 00:11:14.735 15.011 - 15.127: 92.7505% ( 7) 00:11:14.735 15.127 - 15.244: 92.8768% ( 11) 00:11:14.735 15.244 - 15.360: 92.9573% ( 7) 00:11:14.735 15.360 - 15.476: 93.0836% ( 11) 00:11:14.735 15.476 - 15.593: 93.1641% ( 7) 00:11:14.735 15.593 - 15.709: 93.3019% ( 12) 00:11:14.735 15.709 - 15.825: 93.4743% ( 15) 00:11:14.735 15.825 - 15.942: 93.5662% ( 8) 00:11:14.735 15.942 - 16.058: 93.6006% ( 3) 00:11:14.735 16.058 - 16.175: 93.7500% ( 13) 00:11:14.735 16.175 - 16.291: 93.8649% ( 10) 00:11:14.735 16.291 - 16.407: 93.9683% ( 9) 00:11:14.735 16.407 - 16.524: 94.0717% ( 9) 00:11:14.735 16.524 - 16.640: 94.1521% ( 7) 00:11:14.735 16.640 - 16.756: 94.2555% ( 9) 00:11:14.735 16.756 - 16.873: 94.4049% ( 13) 00:11:14.735 16.873 - 16.989: 94.4968% ( 8) 00:11:14.735 16.989 - 17.105: 94.6117% ( 10) 00:11:14.735 17.105 - 17.222: 94.7036% ( 8) 00:11:14.735 17.222 - 17.338: 94.8185% ( 10) 00:11:14.735 17.338 - 17.455: 94.9104% ( 8) 00:11:14.735 17.455 - 17.571: 94.9793% ( 6) 00:11:14.735 17.571 - 17.687: 95.0368% ( 5) 00:11:14.735 17.687 - 17.804: 95.1287% ( 8) 00:11:14.735 17.804 - 17.920: 95.2091% ( 7) 00:11:14.735 17.920 - 18.036: 95.2895% ( 7) 00:11:14.735 18.036 - 18.153: 95.3699% ( 7) 00:11:14.735 18.153 - 18.269: 95.4274% ( 5) 00:11:14.735 18.269 - 18.385: 95.5078% ( 7) 00:11:14.735 18.385 - 18.502: 95.5882% ( 7) 00:11:14.735 18.502 - 18.618: 95.6801% ( 8) 00:11:14.735 18.618 - 18.735: 95.7491% ( 6) 00:11:14.735 18.735 - 18.851: 95.7950% ( 4) 00:11:14.735 18.851 - 18.967: 95.8525% ( 5) 00:11:14.735 18.967 - 19.084: 95.9214% ( 6) 00:11:14.735 19.084 - 19.200: 95.9789% ( 5) 00:11:14.735 19.200 - 19.316: 96.0363% ( 5) 00:11:14.735 19.316 - 19.433: 96.0708% ( 3) 00:11:14.735 19.433 - 19.549: 96.1397% ( 6) 00:11:14.735 19.549 - 19.665: 96.1857% ( 4) 00:11:14.735 19.665 - 19.782: 96.2546% ( 6) 00:11:14.735 19.782 - 19.898: 96.2661% ( 1) 00:11:14.735 19.898 - 20.015: 96.3006% ( 3) 00:11:14.735 20.015 - 20.131: 96.3465% ( 4) 00:11:14.735 20.131 - 20.247: 96.3925% ( 4) 00:11:14.735 20.247 - 20.364: 96.4499% ( 5) 00:11:14.735 20.364 - 20.480: 96.4729% ( 2) 00:11:14.735 20.480 - 20.596: 96.4844% ( 1) 00:11:14.736 20.596 - 20.713: 96.4959% ( 1) 00:11:14.736 20.713 - 20.829: 96.5533% ( 5) 00:11:14.736 20.829 - 20.945: 96.5763% ( 2) 00:11:14.736 20.945 - 21.062: 96.6222% ( 4) 00:11:14.736 21.062 - 21.178: 96.6567% ( 3) 00:11:14.736 21.178 - 21.295: 96.7027% ( 4) 00:11:14.736 21.295 - 21.411: 96.7486% ( 4) 00:11:14.736 21.411 - 21.527: 96.7946% ( 4) 00:11:14.736 21.644 - 21.760: 96.8061% ( 1) 00:11:14.736 21.760 - 21.876: 96.8176% ( 1) 00:11:14.736 21.876 - 21.993: 96.8290% ( 1) 00:11:14.736 22.109 - 22.225: 96.8405% ( 1) 00:11:14.736 22.225 - 22.342: 96.8520% ( 1) 00:11:14.736 22.458 - 22.575: 96.8865% ( 3) 00:11:14.736 22.575 - 22.691: 96.9095% ( 2) 00:11:14.736 22.807 - 22.924: 96.9210% ( 1) 00:11:14.736 22.924 - 23.040: 96.9324% ( 1) 00:11:14.736 23.156 - 23.273: 96.9554% ( 2) 00:11:14.736 23.273 - 23.389: 96.9669% ( 1) 00:11:14.736 23.389 - 23.505: 96.9784% ( 1) 00:11:14.736 23.505 - 23.622: 97.0014% ( 2) 00:11:14.736 23.622 - 23.738: 97.0473% ( 4) 00:11:14.736 23.738 - 23.855: 97.1622% ( 10) 00:11:14.736 23.855 - 23.971: 97.2886% ( 11) 00:11:14.736 23.971 - 24.087: 97.4150% ( 11) 
00:11:14.736 24.087 - 24.204: 97.5414% ( 11) 00:11:14.736 24.204 - 24.320: 97.7252% ( 16) 00:11:14.736 24.320 - 24.436: 98.0469% ( 28) 00:11:14.736 24.436 - 24.553: 98.2767% ( 20) 00:11:14.736 24.553 - 24.669: 98.5409% ( 23) 00:11:14.736 24.669 - 24.785: 98.7592% ( 19) 00:11:14.736 24.785 - 24.902: 98.8971% ( 12) 00:11:14.736 24.902 - 25.018: 99.0005% ( 9) 00:11:14.736 25.018 - 25.135: 99.1039% ( 9) 00:11:14.736 25.135 - 25.251: 99.1613% ( 5) 00:11:14.736 25.251 - 25.367: 99.1728% ( 1) 00:11:14.736 25.367 - 25.484: 99.2073% ( 3) 00:11:14.736 25.600 - 25.716: 99.2647% ( 5) 00:11:14.736 25.716 - 25.833: 99.2992% ( 3) 00:11:14.736 25.949 - 26.065: 99.3107% ( 1) 00:11:14.736 26.298 - 26.415: 99.3222% ( 1) 00:11:14.736 26.531 - 26.647: 99.3336% ( 1) 00:11:14.736 26.647 - 26.764: 99.3451% ( 1) 00:11:14.736 26.764 - 26.880: 99.3566% ( 1) 00:11:14.736 26.996 - 27.113: 99.3796% ( 2) 00:11:14.736 27.113 - 27.229: 99.3911% ( 1) 00:11:14.736 27.345 - 27.462: 99.4026% ( 1) 00:11:14.736 27.811 - 27.927: 99.4141% ( 1) 00:11:14.736 28.393 - 28.509: 99.4256% ( 1) 00:11:14.736 28.625 - 28.742: 99.4370% ( 1) 00:11:14.736 28.742 - 28.858: 99.4485% ( 1) 00:11:14.736 28.858 - 28.975: 99.4600% ( 1) 00:11:14.736 29.673 - 29.789: 99.4715% ( 1) 00:11:14.736 30.022 - 30.255: 99.4830% ( 1) 00:11:14.736 30.255 - 30.487: 99.4945% ( 1) 00:11:14.736 30.487 - 30.720: 99.5404% ( 4) 00:11:14.736 30.720 - 30.953: 99.5979% ( 5) 00:11:14.736 30.953 - 31.185: 99.6209% ( 2) 00:11:14.736 31.185 - 31.418: 99.6553% ( 3) 00:11:14.736 31.418 - 31.651: 99.6783% ( 2) 00:11:14.736 31.651 - 31.884: 99.7243% ( 4) 00:11:14.736 31.884 - 32.116: 99.7472% ( 2) 00:11:14.736 32.349 - 32.582: 99.7817% ( 3) 00:11:14.736 32.582 - 32.815: 99.8047% ( 2) 00:11:14.736 32.815 - 33.047: 99.8162% ( 1) 00:11:14.736 33.280 - 33.513: 99.8277% ( 1) 00:11:14.736 33.978 - 34.211: 99.8392% ( 1) 00:11:14.736 34.909 - 35.142: 99.8506% ( 1) 00:11:14.736 35.142 - 35.375: 99.8621% ( 1) 00:11:14.736 35.375 - 35.607: 99.8736% ( 1) 00:11:14.736 36.073 - 36.305: 99.8851% ( 1) 00:11:14.736 37.469 - 37.702: 99.8966% ( 1) 00:11:14.736 37.702 - 37.935: 99.9081% ( 1) 00:11:14.736 40.029 - 40.262: 99.9196% ( 1) 00:11:14.736 43.287 - 43.520: 99.9311% ( 1) 00:11:14.736 46.080 - 46.313: 99.9426% ( 1) 00:11:14.736 46.313 - 46.545: 99.9540% ( 1) 00:11:14.736 47.709 - 47.942: 99.9655% ( 1) 00:11:14.736 49.804 - 50.036: 99.9770% ( 1) 00:11:14.736 51.665 - 51.898: 99.9885% ( 1) 00:11:14.736 77.731 - 78.196: 100.0000% ( 1) 00:11:14.736 00:11:14.736 00:11:14.736 real 0m1.303s 00:11:14.736 user 0m1.105s 00:11:14.736 sys 0m0.143s 00:11:14.736 18:03:07 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:14.736 18:03:07 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:11:14.736 ************************************ 00:11:14.736 END TEST nvme_overhead 00:11:14.736 ************************************ 00:11:14.736 18:03:07 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:14.736 18:03:07 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:11:14.736 18:03:07 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:14.736 18:03:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:14.736 ************************************ 00:11:14.736 START TEST nvme_arbitration 00:11:14.736 ************************************ 00:11:14.736 18:03:07 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 
3 -i 0 00:11:18.022 Initializing NVMe Controllers 00:11:18.022 Attached to 0000:00:10.0 00:11:18.022 Attached to 0000:00:11.0 00:11:18.022 Attached to 0000:00:13.0 00:11:18.022 Attached to 0000:00:12.0 00:11:18.022 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:18.022 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:18.022 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:18.022 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:18.022 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:18.022 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:18.022 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:18.022 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:18.022 Initialization complete. Launching workers. 00:11:18.022 Starting thread on core 1 with urgent priority queue 00:11:18.022 Starting thread on core 2 with urgent priority queue 00:11:18.022 Starting thread on core 3 with urgent priority queue 00:11:18.022 Starting thread on core 0 with urgent priority queue 00:11:18.022 QEMU NVMe Ctrl (12340 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:11:18.022 QEMU NVMe Ctrl (12342 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:11:18.022 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:11:18.022 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:11:18.022 QEMU NVMe Ctrl (12343 ) core 2: 832.00 IO/s 120.19 secs/100000 ios 00:11:18.022 QEMU NVMe Ctrl (12342 ) core 3: 533.33 IO/s 187.50 secs/100000 ios 00:11:18.022 ======================================================== 00:11:18.022 00:11:18.281 ************************************ 00:11:18.281 END TEST nvme_arbitration 00:11:18.281 ************************************ 00:11:18.281 00:11:18.281 real 0m3.424s 00:11:18.281 user 0m9.374s 00:11:18.281 sys 0m0.166s 00:11:18.281 18:03:10 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:18.281 18:03:10 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:11:18.281 18:03:10 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:18.281 18:03:10 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:18.281 18:03:10 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:18.281 18:03:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:18.281 ************************************ 00:11:18.281 START TEST nvme_single_aen 00:11:18.281 ************************************ 00:11:18.281 18:03:10 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:18.540 Asynchronous Event Request test 00:11:18.540 Attached to 0000:00:10.0 00:11:18.540 Attached to 0000:00:11.0 00:11:18.540 Attached to 0000:00:13.0 00:11:18.540 Attached to 0000:00:12.0 00:11:18.540 Reset controller to setup AER completions for this process 00:11:18.540 Registering asynchronous event callbacks... 
00:11:18.540 Getting orig temperature thresholds of all controllers 00:11:18.540 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:18.540 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:18.540 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:18.540 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:18.540 Setting all controllers temperature threshold low to trigger AER 00:11:18.540 Waiting for all controllers temperature threshold to be set lower 00:11:18.540 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:18.540 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:18.540 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:18.540 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:18.540 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:18.540 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:18.540 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:18.540 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:18.540 Waiting for all controllers to trigger AER and reset threshold 00:11:18.540 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:18.540 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:18.540 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:18.540 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:18.540 Cleaning up... 00:11:18.540 ************************************ 00:11:18.540 END TEST nvme_single_aen 00:11:18.540 ************************************ 00:11:18.540 00:11:18.540 real 0m0.269s 00:11:18.540 user 0m0.109s 00:11:18.540 sys 0m0.115s 00:11:18.540 18:03:10 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:18.540 18:03:10 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:11:18.540 18:03:10 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:18.540 18:03:10 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:18.540 18:03:10 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:18.540 18:03:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:18.540 ************************************ 00:11:18.540 START TEST nvme_doorbell_aers 00:11:18.540 ************************************ 00:11:18.540 18:03:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:11:18.540 18:03:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:11:18.540 18:03:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:18.540 18:03:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:18.540 18:03:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:18.540 18:03:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:18.540 18:03:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:11:18.540 18:03:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:18.540 18:03:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:18.540 18:03:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 
00:11:18.540 18:03:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:11:18.541 18:03:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:18.541 18:03:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:18.541 18:03:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:18.800 [2024-05-15 18:03:11.265747] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:28.795 Executing: test_write_invalid_db 00:11:28.795 Waiting for AER completion... 00:11:28.795 Failure: test_write_invalid_db 00:11:28.795 00:11:28.795 Executing: test_invalid_db_write_overflow_sq 00:11:28.795 Waiting for AER completion... 00:11:28.795 Failure: test_invalid_db_write_overflow_sq 00:11:28.795 00:11:28.795 Executing: test_invalid_db_write_overflow_cq 00:11:28.795 Waiting for AER completion... 00:11:28.795 Failure: test_invalid_db_write_overflow_cq 00:11:28.795 00:11:28.795 18:03:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:28.795 18:03:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:29.052 [2024-05-15 18:03:21.314993] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:39.168 Executing: test_write_invalid_db 00:11:39.168 Waiting for AER completion... 00:11:39.168 Failure: test_write_invalid_db 00:11:39.168 00:11:39.168 Executing: test_invalid_db_write_overflow_sq 00:11:39.168 Waiting for AER completion... 00:11:39.168 Failure: test_invalid_db_write_overflow_sq 00:11:39.168 00:11:39.168 Executing: test_invalid_db_write_overflow_cq 00:11:39.168 Waiting for AER completion... 00:11:39.168 Failure: test_invalid_db_write_overflow_cq 00:11:39.168 00:11:39.168 18:03:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:39.168 18:03:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:39.168 [2024-05-15 18:03:31.329071] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:49.147 Executing: test_write_invalid_db 00:11:49.147 Waiting for AER completion... 00:11:49.147 Failure: test_write_invalid_db 00:11:49.147 00:11:49.147 Executing: test_invalid_db_write_overflow_sq 00:11:49.147 Waiting for AER completion... 00:11:49.147 Failure: test_invalid_db_write_overflow_sq 00:11:49.147 00:11:49.147 Executing: test_invalid_db_write_overflow_cq 00:11:49.147 Waiting for AER completion... 
00:11:49.147 Failure: test_invalid_db_write_overflow_cq 00:11:49.147 00:11:49.147 18:03:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:49.147 18:03:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:49.147 [2024-05-15 18:03:41.411525] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:59.121 Executing: test_write_invalid_db 00:11:59.121 Waiting for AER completion... 00:11:59.121 Failure: test_write_invalid_db 00:11:59.121 00:11:59.121 Executing: test_invalid_db_write_overflow_sq 00:11:59.121 Waiting for AER completion... 00:11:59.121 Failure: test_invalid_db_write_overflow_sq 00:11:59.121 00:11:59.122 Executing: test_invalid_db_write_overflow_cq 00:11:59.122 Waiting for AER completion... 00:11:59.122 Failure: test_invalid_db_write_overflow_cq 00:11:59.122 00:11:59.122 ************************************ 00:11:59.122 END TEST nvme_doorbell_aers 00:11:59.122 ************************************ 00:11:59.122 00:11:59.122 real 0m40.262s 00:11:59.122 user 0m33.573s 00:11:59.122 sys 0m6.285s 00:11:59.122 18:03:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:59.122 18:03:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:11:59.122 18:03:51 nvme -- nvme/nvme.sh@97 -- # uname 00:11:59.122 18:03:51 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:59.122 18:03:51 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:59.122 18:03:51 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:11:59.122 18:03:51 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:59.122 18:03:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:59.122 ************************************ 00:11:59.122 START TEST nvme_multi_aen 00:11:59.122 ************************************ 00:11:59.122 18:03:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:59.122 [2024-05-15 18:03:51.473913] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:59.122 [2024-05-15 18:03:51.474022] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:59.122 [2024-05-15 18:03:51.474049] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:59.122 [2024-05-15 18:03:51.476274] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:59.122 [2024-05-15 18:03:51.476328] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:59.122 [2024-05-15 18:03:51.476350] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:59.122 [2024-05-15 18:03:51.477715] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. 
Dropping the request. 00:11:59.122 [2024-05-15 18:03:51.477752] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:59.122 [2024-05-15 18:03:51.477771] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:59.122 [2024-05-15 18:03:51.479087] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:59.122 [2024-05-15 18:03:51.479130] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:59.122 [2024-05-15 18:03:51.479150] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69230) is not found. Dropping the request. 00:11:59.122 Child process pid: 69744 00:11:59.381 [Child] Asynchronous Event Request test 00:11:59.381 [Child] Attached to 0000:00:10.0 00:11:59.381 [Child] Attached to 0000:00:11.0 00:11:59.381 [Child] Attached to 0000:00:13.0 00:11:59.381 [Child] Attached to 0000:00:12.0 00:11:59.381 [Child] Registering asynchronous event callbacks... 00:11:59.381 [Child] Getting orig temperature thresholds of all controllers 00:11:59.381 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:59.381 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:59.381 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:59.381 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:59.381 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:59.381 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:59.381 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:59.381 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:59.381 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:59.381 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:59.381 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:59.381 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:59.381 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:59.381 [Child] Cleaning up... 00:11:59.381 Asynchronous Event Request test 00:11:59.381 Attached to 0000:00:10.0 00:11:59.381 Attached to 0000:00:11.0 00:11:59.381 Attached to 0000:00:13.0 00:11:59.382 Attached to 0000:00:12.0 00:11:59.382 Reset controller to setup AER completions for this process 00:11:59.382 Registering asynchronous event callbacks... 
00:11:59.382 Getting orig temperature thresholds of all controllers 00:11:59.382 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:59.382 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:59.382 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:59.382 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:59.382 Setting all controllers temperature threshold low to trigger AER 00:11:59.382 Waiting for all controllers temperature threshold to be set lower 00:11:59.382 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:59.382 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:59.382 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:59.382 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:59.382 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:59.382 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:59.382 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:59.382 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:59.382 Waiting for all controllers to trigger AER and reset threshold 00:11:59.382 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:59.382 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:59.382 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:59.382 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:59.382 Cleaning up... 00:11:59.382 ************************************ 00:11:59.382 END TEST nvme_multi_aen 00:11:59.382 ************************************ 00:11:59.382 00:11:59.382 real 0m0.556s 00:11:59.382 user 0m0.202s 00:11:59.382 sys 0m0.254s 00:11:59.382 18:03:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:59.382 18:03:51 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:11:59.382 18:03:51 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:59.382 18:03:51 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:59.382 18:03:51 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:59.382 18:03:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:59.382 ************************************ 00:11:59.382 START TEST nvme_startup 00:11:59.382 ************************************ 00:11:59.382 18:03:51 nvme.nvme_startup -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:59.641 Initializing NVMe Controllers 00:11:59.641 Attached to 0000:00:10.0 00:11:59.641 Attached to 0000:00:11.0 00:11:59.641 Attached to 0000:00:13.0 00:11:59.641 Attached to 0000:00:12.0 00:11:59.641 Initialization complete. 00:11:59.641 Time used:190151.703 (us). 
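Looking back at the nvme_doorbell_aers stage earlier in this log, the xtrace there shows how the harness enumerates controller addresses and drives each one under a 10-second timeout; condensed into a standalone sketch (paths, jq filter, and flags copied verbatim from the trace):

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done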
00:11:59.641 00:11:59.641 real 0m0.271s 00:11:59.641 user 0m0.103s 00:11:59.641 sys 0m0.126s 00:11:59.641 18:03:52 nvme.nvme_startup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:59.641 18:03:52 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:11:59.641 ************************************ 00:11:59.641 END TEST nvme_startup 00:11:59.641 ************************************ 00:11:59.641 18:03:52 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:59.641 18:03:52 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:59.641 18:03:52 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:59.641 18:03:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:59.641 ************************************ 00:11:59.641 START TEST nvme_multi_secondary 00:11:59.641 ************************************ 00:11:59.641 18:03:52 nvme.nvme_multi_secondary -- common/autotest_common.sh@1121 -- # nvme_multi_secondary 00:11:59.641 18:03:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=69800 00:11:59.641 18:03:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:59.641 18:03:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=69801 00:11:59.641 18:03:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:59.641 18:03:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:02.942 Initializing NVMe Controllers 00:12:02.942 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:02.942 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:02.942 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:02.942 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:02.942 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:02.942 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:02.942 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:02.942 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:02.942 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:02.942 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:02.942 Initialization complete. Launching workers. 
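The nvme_multi_secondary stage that just started runs three spdk_nvme_perf instances concurrently against the same controllers, sharing DPDK memory through the common -i 0 id. Condensed from the first round's commands and wait calls in this log (core masks and run times are verbatim; which process stays in the foreground is a guess from the trace ordering):

    cd /home/vagrant/spdk_repo/spdk
    build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
    build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!
    build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
    wait "$pid0"
    wait "$pid1"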
00:12:02.942 ======================================================== 00:12:02.942 Latency(us) 00:12:02.942 Device Information : IOPS MiB/s Average min max 00:12:02.942 PCIE (0000:00:10.0) NSID 1 from core 1: 5524.44 21.58 2894.32 1081.25 15525.25 00:12:02.942 PCIE (0000:00:11.0) NSID 1 from core 1: 5524.44 21.58 2895.97 1117.35 15192.68 00:12:02.942 PCIE (0000:00:13.0) NSID 1 from core 1: 5524.44 21.58 2896.27 1115.27 15200.17 00:12:02.942 PCIE (0000:00:12.0) NSID 1 from core 1: 5524.44 21.58 2896.51 1114.49 14871.33 00:12:02.942 PCIE (0000:00:12.0) NSID 2 from core 1: 5529.77 21.60 2894.02 1111.41 15082.11 00:12:02.942 PCIE (0000:00:12.0) NSID 3 from core 1: 5529.77 21.60 2894.39 1099.42 15813.84 00:12:02.942 ======================================================== 00:12:02.942 Total : 33157.30 129.52 2895.25 1081.25 15813.84 00:12:02.942 00:12:03.201 Initializing NVMe Controllers 00:12:03.201 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:03.201 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:03.201 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:03.201 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:03.201 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:03.201 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:03.201 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:03.201 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:03.201 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:03.201 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:03.201 Initialization complete. Launching workers. 00:12:03.201 ======================================================== 00:12:03.201 Latency(us) 00:12:03.201 Device Information : IOPS MiB/s Average min max 00:12:03.201 PCIE (0000:00:10.0) NSID 1 from core 2: 2361.43 9.22 6772.39 1611.99 14643.90 00:12:03.201 PCIE (0000:00:11.0) NSID 1 from core 2: 2361.43 9.22 6774.55 1585.92 13660.68 00:12:03.201 PCIE (0000:00:13.0) NSID 1 from core 2: 2361.43 9.22 6774.96 1669.36 13114.48 00:12:03.201 PCIE (0000:00:12.0) NSID 1 from core 2: 2361.43 9.22 6774.29 1681.64 13672.74 00:12:03.201 PCIE (0000:00:12.0) NSID 2 from core 2: 2361.43 9.22 6774.75 1527.71 13223.40 00:12:03.201 PCIE (0000:00:12.0) NSID 3 from core 2: 2361.43 9.22 6774.66 1330.83 13260.18 00:12:03.201 ======================================================== 00:12:03.201 Total : 14168.61 55.35 6774.27 1330.83 14643.90 00:12:03.201 00:12:03.459 18:03:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 69800 00:12:05.362 Initializing NVMe Controllers 00:12:05.362 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:05.362 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:05.362 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:05.362 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:05.362 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:05.362 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:05.362 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:05.362 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:05.362 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:05.362 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:05.362 Initialization complete. Launching workers. 
00:12:05.362 ======================================================== 00:12:05.362 Latency(us) 00:12:05.362 Device Information : IOPS MiB/s Average min max 00:12:05.362 PCIE (0000:00:10.0) NSID 1 from core 0: 7806.26 30.49 2047.86 990.01 7010.07 00:12:05.362 PCIE (0000:00:11.0) NSID 1 from core 0: 7806.26 30.49 2049.13 1011.98 7226.28 00:12:05.362 PCIE (0000:00:13.0) NSID 1 from core 0: 7806.26 30.49 2049.09 996.78 7115.62 00:12:05.362 PCIE (0000:00:12.0) NSID 1 from core 0: 7806.26 30.49 2049.06 961.22 7057.53 00:12:05.363 PCIE (0000:00:12.0) NSID 2 from core 0: 7806.26 30.49 2049.02 908.92 7389.87 00:12:05.363 PCIE (0000:00:12.0) NSID 3 from core 0: 7806.26 30.49 2048.97 852.04 6312.57 00:12:05.363 ======================================================== 00:12:05.363 Total : 46837.56 182.96 2048.86 852.04 7389.87 00:12:05.363 00:12:05.363 18:03:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 69801 00:12:05.363 18:03:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=69871 00:12:05.363 18:03:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:05.363 18:03:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=69872 00:12:05.363 18:03:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:05.363 18:03:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:08.696 Initializing NVMe Controllers 00:12:08.696 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:08.696 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:08.696 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:08.696 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:08.696 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:08.696 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:08.696 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:08.696 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:08.696 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:08.696 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:08.696 Initialization complete. Launching workers. 
00:12:08.696 ======================================================== 00:12:08.696 Latency(us) 00:12:08.696 Device Information : IOPS MiB/s Average min max 00:12:08.696 PCIE (0000:00:10.0) NSID 1 from core 0: 5680.85 22.19 2814.56 924.95 8336.06 00:12:08.696 PCIE (0000:00:11.0) NSID 1 from core 0: 5680.85 22.19 2816.17 941.38 8647.44 00:12:08.696 PCIE (0000:00:13.0) NSID 1 from core 0: 5680.85 22.19 2816.10 975.80 6827.36 00:12:08.696 PCIE (0000:00:12.0) NSID 1 from core 0: 5680.85 22.19 2816.05 991.92 6760.40 00:12:08.696 PCIE (0000:00:12.0) NSID 2 from core 0: 5680.85 22.19 2815.95 979.33 7192.16 00:12:08.696 PCIE (0000:00:12.0) NSID 3 from core 0: 5680.85 22.19 2815.85 997.08 8138.00 00:12:08.696 ======================================================== 00:12:08.696 Total : 34085.12 133.14 2815.78 924.95 8647.44 00:12:08.696 00:12:08.696 Initializing NVMe Controllers 00:12:08.696 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:08.696 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:08.696 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:08.696 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:08.696 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:08.696 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:08.696 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:08.696 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:08.696 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:08.696 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:08.696 Initialization complete. Launching workers. 00:12:08.696 ======================================================== 00:12:08.696 Latency(us) 00:12:08.696 Device Information : IOPS MiB/s Average min max 00:12:08.696 PCIE (0000:00:10.0) NSID 1 from core 1: 5486.95 21.43 2913.92 1099.92 10362.41 00:12:08.696 PCIE (0000:00:11.0) NSID 1 from core 1: 5486.95 21.43 2914.88 1135.34 9469.51 00:12:08.696 PCIE (0000:00:13.0) NSID 1 from core 1: 5486.95 21.43 2914.53 1110.68 9678.02 00:12:08.696 PCIE (0000:00:12.0) NSID 1 from core 1: 5486.95 21.43 2914.15 1051.07 9704.03 00:12:08.696 PCIE (0000:00:12.0) NSID 2 from core 1: 5486.95 21.43 2913.76 1013.37 9930.38 00:12:08.696 PCIE (0000:00:12.0) NSID 3 from core 1: 5486.95 21.43 2913.42 989.41 9816.53 00:12:08.696 ======================================================== 00:12:08.696 Total : 32921.68 128.60 2914.11 989.41 10362.41 00:12:08.696 00:12:10.595 Initializing NVMe Controllers 00:12:10.595 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:10.595 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:10.595 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:10.595 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:10.595 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:10.595 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:10.595 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:10.595 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:10.595 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:10.595 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:10.595 Initialization complete. Launching workers. 
00:12:10.595 ======================================================== 00:12:10.595 Latency(us) 00:12:10.595 Device Information : IOPS MiB/s Average min max 00:12:10.595 PCIE (0000:00:10.0) NSID 1 from core 2: 3484.43 13.61 4589.21 1096.18 15185.16 00:12:10.595 PCIE (0000:00:11.0) NSID 1 from core 2: 3484.43 13.61 4591.07 1015.50 15228.61 00:12:10.595 PCIE (0000:00:13.0) NSID 1 from core 2: 3484.43 13.61 4590.96 1018.83 17292.14 00:12:10.595 PCIE (0000:00:12.0) NSID 1 from core 2: 3484.43 13.61 4590.87 926.81 17120.63 00:12:10.595 PCIE (0000:00:12.0) NSID 2 from core 2: 3484.43 13.61 4590.99 841.16 14781.96 00:12:10.595 PCIE (0000:00:12.0) NSID 3 from core 2: 3484.43 13.61 4594.38 1112.19 14162.32 00:12:10.595 ======================================================== 00:12:10.595 Total : 20906.60 81.67 4591.25 841.16 17292.14 00:12:10.595 00:12:10.595 ************************************ 00:12:10.595 END TEST nvme_multi_secondary 00:12:10.595 ************************************ 00:12:10.595 18:04:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 69871 00:12:10.595 18:04:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 69872 00:12:10.595 00:12:10.595 real 0m10.914s 00:12:10.595 user 0m18.569s 00:12:10.595 sys 0m0.882s 00:12:10.595 18:04:03 nvme.nvme_multi_secondary -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:10.595 18:04:03 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:12:10.595 18:04:03 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:10.595 18:04:03 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:12:10.595 18:04:03 nvme -- common/autotest_common.sh@1085 -- # [[ -e /proc/68811 ]] 00:12:10.595 18:04:03 nvme -- common/autotest_common.sh@1086 -- # kill 68811 00:12:10.595 18:04:03 nvme -- common/autotest_common.sh@1087 -- # wait 68811 00:12:10.595 [2024-05-15 18:04:03.088422] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.595 [2024-05-15 18:04:03.088481] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.595 [2024-05-15 18:04:03.088525] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.595 [2024-05-15 18:04:03.088790] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.595 [2024-05-15 18:04:03.091090] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.595 [2024-05-15 18:04:03.091583] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.595 [2024-05-15 18:04:03.091982] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.595 [2024-05-15 18:04:03.092198] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.595 [2024-05-15 18:04:03.094414] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 
00:12:10.595 [2024-05-15 18:04:03.094832] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.595 [2024-05-15 18:04:03.095223] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.854 [2024-05-15 18:04:03.095647] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.854 [2024-05-15 18:04:03.098022] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.854 [2024-05-15 18:04:03.098451] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.854 [2024-05-15 18:04:03.098849] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.854 [2024-05-15 18:04:03.098881] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69743) is not found. Dropping the request. 00:12:10.854 18:04:03 nvme -- common/autotest_common.sh@1089 -- # rm -f /var/run/spdk_stub0 00:12:10.854 18:04:03 nvme -- common/autotest_common.sh@1093 -- # echo 2 00:12:10.854 18:04:03 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:10.854 18:04:03 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:10.854 18:04:03 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:10.854 18:04:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:10.854 ************************************ 00:12:10.854 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:10.854 ************************************ 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:11.114 * Looking for test storage... 
00:12:11.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=70027 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 70027 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 70027 ']' 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:11.114 18:04:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:11.372 [2024-05-15 18:04:03.645004] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:12:11.372 [2024-05-15 18:04:03.645896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70027 ] 00:12:11.372 [2024-05-15 18:04:03.846039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.631 [2024-05-15 18:04:04.125909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.631 [2024-05-15 18:04:04.125973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.631 [2024-05-15 18:04:04.126113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.631 [2024-05-15 18:04:04.126127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.565 18:04:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:12.565 18:04:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:12:12.565 18:04:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:12.565 18:04:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.565 18:04:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:12.565 nvme0n1 00:12:12.565 18:04:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.565 18:04:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:12.565 18:04:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_H8mNd.txt 00:12:12.565 18:04:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:12.565 18:04:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.565 18:04:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:12.565 true 00:12:12.565 18:04:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.565 18:04:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:12.565 18:04:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1715796245 00:12:12.565 18:04:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=70050 00:12:12.565 18:04:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:12.565 
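At this point the stuck-admin-command scenario is fully armed: the controller at 0000:00:10.0 is attached as bdev nvme0, a one-shot error injection is registered for admin opcode 10 (0x0a, Get Features) that holds the command for up to 15 s (err_injection_timeout=15000000) before completing it with SCT 0 / SC 1, and bdev_nvme_send_cmd has submitted a Get Features (Number of Queues, cdw10=0x7, encoded in the base64 payload above) for the injection to trap. A minimal sketch of the same RPC sequence, writing rpc.py for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and using $cmd_b64 as a stand-in for the 64-byte command buffer shown above:

  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # held Get Features: completes only when the reset below aborts it
  # (the real script writes to a mktemp file, err_inj_XXXXX.txt)
  rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" > /tmp/err_inj.txt &
  rpc.py bdev_nvme_reset_controller nvme0

The bdev_nvme_reset_controller call issued after the 2-second sleep is the actual test: it must abort the pending admin command and still bring the controller back, and the completion captured in /tmp/err_inj_H8mNd.txt is afterwards base64-decoded to verify that the returned SCT/SC (0x0/0x1, the INVALID OPCODE (00/01) seen in the trace) match what was injected.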
18:04:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:12.565 18:04:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:15.104 [2024-05-15 18:04:07.024698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:12:15.104 [2024-05-15 18:04:07.025041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:15.104 [2024-05-15 18:04:07.025085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:15.104 [2024-05-15 18:04:07.025106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.104 [2024-05-15 18:04:07.027109] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:15.104 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 70050 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 70050 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 70050 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_H8mNd.txt 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_H8mNd.txt 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 70027 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 70027 ']' 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 70027 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70027 00:12:15.104 killing process with pid 70027 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70027' 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 70027 00:12:15.104 18:04:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 70027 00:12:17.008 18:04:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:17.008 18:04:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:17.008 00:12:17.008 real 0m6.040s 00:12:17.008 user 0m20.622s 
00:12:17.008 sys 0m0.755s 00:12:17.008 ************************************ 00:12:17.008 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:17.008 ************************************ 00:12:17.008 18:04:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:17.008 18:04:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:17.008 18:04:09 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:17.008 18:04:09 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:17.008 18:04:09 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:17.008 18:04:09 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:17.008 18:04:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:17.008 ************************************ 00:12:17.008 START TEST nvme_fio 00:12:17.008 ************************************ 00:12:17.008 18:04:09 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:12:17.008 18:04:09 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:17.008 18:04:09 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:17.008 18:04:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:17.008 18:04:09 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:17.008 18:04:09 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:12:17.008 18:04:09 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:17.008 18:04:09 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:17.008 18:04:09 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:12:17.267 18:04:09 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:12:17.267 18:04:09 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:17.267 18:04:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:17.267 18:04:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:17.267 18:04:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:17.267 18:04:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:17.267 18:04:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:17.539 18:04:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:17.539 18:04:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:17.830 18:04:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:17.830 18:04:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:17.830 
18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:17.830 18:04:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:17.830 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:17.830 fio-3.35 00:12:17.830 Starting 1 thread 00:12:21.117 00:12:21.117 test: (groupid=0, jobs=1): err= 0: pid=70196: Wed May 15 18:04:13 2024 00:12:21.117 read: IOPS=16.8k, BW=65.7MiB/s (68.9MB/s)(132MiB/2001msec) 00:12:21.117 slat (usec): min=4, max=675, avg= 6.23, stdev= 4.63 00:12:21.117 clat (usec): min=290, max=9915, avg=3778.72, stdev=521.80 00:12:21.118 lat (usec): min=296, max=9968, avg=3784.95, stdev=522.59 00:12:21.118 clat percentiles (usec): 00:12:21.118 | 1.00th=[ 3163], 5.00th=[ 3294], 10.00th=[ 3326], 20.00th=[ 3392], 00:12:21.118 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3720], 00:12:21.118 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4621], 00:12:21.118 | 99.00th=[ 5145], 99.50th=[ 6325], 99.90th=[ 7701], 99.95th=[ 8586], 00:12:21.118 | 99.99th=[ 9765] 00:12:21.118 bw ( KiB/s): min=64824, max=73624, per=100.00%, avg=68176.00, stdev=4759.72, samples=3 00:12:21.118 iops : min=16206, max=18406, avg=17044.00, stdev=1189.93, samples=3 00:12:21.118 write: IOPS=16.9k, BW=65.9MiB/s (69.1MB/s)(132MiB/2001msec); 0 zone resets 00:12:21.118 slat (usec): min=4, max=166, avg= 6.38, stdev= 2.34 00:12:21.118 clat (usec): min=259, max=9779, avg=3791.33, stdev=521.96 00:12:21.118 lat (usec): min=265, max=9789, avg=3797.71, stdev=522.69 00:12:21.118 clat percentiles (usec): 00:12:21.118 | 1.00th=[ 3163], 5.00th=[ 3294], 10.00th=[ 3326], 20.00th=[ 3425], 00:12:21.118 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3720], 00:12:21.118 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4621], 00:12:21.118 | 99.00th=[ 5145], 99.50th=[ 6325], 99.90th=[ 7701], 99.95th=[ 8717], 00:12:21.118 | 99.99th=[ 9634] 00:12:21.118 bw ( KiB/s): min=64384, max=73208, per=100.00%, avg=68002.67, stdev=4621.03, samples=3 00:12:21.118 iops : min=16096, max=18302, avg=17000.67, stdev=1155.26, samples=3 00:12:21.118 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:21.118 lat 
(msec) : 2=0.06%, 4=67.24%, 10=32.66% 00:12:21.118 cpu : usr=98.30%, sys=0.40%, ctx=31, majf=0, minf=607 00:12:21.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:21.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:21.118 issued rwts: total=33680,33736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:21.118 00:12:21.118 Run status group 0 (all jobs): 00:12:21.118 READ: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=132MiB (138MB), run=2001-2001msec 00:12:21.118 WRITE: bw=65.9MiB/s (69.1MB/s), 65.9MiB/s-65.9MiB/s (69.1MB/s-69.1MB/s), io=132MiB (138MB), run=2001-2001msec 00:12:21.412 ----------------------------------------------------- 00:12:21.412 Suppressions used: 00:12:21.412 count bytes template 00:12:21.412 1 32 /usr/src/fio/parse.c 00:12:21.412 1 8 libtcmalloc_minimal.so 00:12:21.412 ----------------------------------------------------- 00:12:21.412 00:12:21.412 18:04:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:21.412 18:04:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:21.412 18:04:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:21.412 18:04:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:21.671 18:04:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:21.671 18:04:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:21.930 18:04:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:21.930 18:04:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:21.930 18:04:14 nvme.nvme_fio -- 
common/autotest_common.sh@1343 -- # break 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:21.930 18:04:14 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:21.930 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:21.930 fio-3.35 00:12:21.930 Starting 1 thread 00:12:25.222 00:12:25.222 test: (groupid=0, jobs=1): err= 0: pid=70267: Wed May 15 18:04:17 2024 00:12:25.222 read: IOPS=16.2k, BW=63.2MiB/s (66.3MB/s)(127MiB/2001msec) 00:12:25.222 slat (usec): min=4, max=183, avg= 6.43, stdev= 2.14 00:12:25.222 clat (usec): min=508, max=9449, avg=3928.15, stdev=572.64 00:12:25.222 lat (usec): min=518, max=9514, avg=3934.58, stdev=573.48 00:12:25.222 clat percentiles (usec): 00:12:25.222 | 1.00th=[ 3294], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3490], 00:12:25.222 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 4047], 00:12:25.222 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4752], 00:12:25.222 | 99.00th=[ 5538], 99.50th=[ 6849], 99.90th=[ 7635], 99.95th=[ 8029], 00:12:25.222 | 99.99th=[ 9372] 00:12:25.222 bw ( KiB/s): min=63648, max=68240, per=100.00%, avg=66200.00, stdev=2338.42, samples=3 00:12:25.222 iops : min=15912, max=17060, avg=16550.00, stdev=584.61, samples=3 00:12:25.222 write: IOPS=16.2k, BW=63.3MiB/s (66.4MB/s)(127MiB/2001msec); 0 zone resets 00:12:25.222 slat (nsec): min=4772, max=46020, avg=6613.30, stdev=1907.89 00:12:25.222 clat (usec): min=306, max=9378, avg=3943.42, stdev=581.62 00:12:25.222 lat (usec): min=313, max=9389, avg=3950.03, stdev=582.45 00:12:25.222 clat percentiles (usec): 00:12:25.222 | 1.00th=[ 3294], 5.00th=[ 3392], 10.00th=[ 3425], 20.00th=[ 3490], 00:12:25.222 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 4080], 00:12:25.222 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4752], 00:12:25.222 | 99.00th=[ 5735], 99.50th=[ 7046], 99.90th=[ 7635], 99.95th=[ 8225], 00:12:25.222 | 99.99th=[ 9110] 00:12:25.222 bw ( KiB/s): min=63928, max=68000, per=100.00%, avg=66133.33, stdev=2057.02, samples=3 00:12:25.222 iops : min=15982, max=17000, avg=16533.33, stdev=514.25, samples=3 00:12:25.222 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:25.222 lat (msec) : 2=0.04%, 4=58.99%, 10=40.94% 00:12:25.222 cpu : usr=98.55%, sys=0.35%, ctx=5, majf=0, minf=607 00:12:25.222 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:25.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:25.222 issued rwts: total=32385,32438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:25.222 00:12:25.223 Run status group 0 (all jobs): 00:12:25.223 READ: bw=63.2MiB/s (66.3MB/s), 63.2MiB/s-63.2MiB/s (66.3MB/s-66.3MB/s), io=127MiB (133MB), run=2001-2001msec 00:12:25.223 WRITE: bw=63.3MiB/s (66.4MB/s), 63.3MiB/s-63.3MiB/s (66.4MB/s-66.4MB/s), io=127MiB (133MB), run=2001-2001msec 00:12:25.482 ----------------------------------------------------- 00:12:25.482 Suppressions used: 00:12:25.482 count bytes template 00:12:25.482 1 32 /usr/src/fio/parse.c 00:12:25.482 1 8 libtcmalloc_minimal.so 00:12:25.482 
----------------------------------------------------- 00:12:25.482 00:12:25.482 18:04:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:25.482 18:04:17 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:25.482 18:04:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:25.482 18:04:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:25.741 18:04:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:25.741 18:04:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:26.000 18:04:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:26.000 18:04:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:12:26.000 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:26.001 18:04:18 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:26.260 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:26.260 fio-3.35 00:12:26.260 Starting 1 thread 00:12:30.455 00:12:30.455 test: (groupid=0, jobs=1): err= 0: pid=70329: Wed May 15 18:04:22 2024 00:12:30.455 read: IOPS=16.9k, BW=66.1MiB/s (69.3MB/s)(132MiB/2001msec) 00:12:30.455 slat (usec): min=4, max=297, avg= 6.21, stdev= 2.58 00:12:30.455 clat (usec): min=254, max=9128, avg=3756.94, stdev=495.28 00:12:30.455 lat (usec): min=259, max=9148, avg=3763.15, stdev=495.97 00:12:30.455 clat percentiles (usec): 00:12:30.455 | 1.00th=[ 2933], 5.00th=[ 3326], 
10.00th=[ 3392], 20.00th=[ 3490], 00:12:30.455 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3687], 00:12:30.455 | 70.00th=[ 3785], 80.00th=[ 4113], 90.00th=[ 4359], 95.00th=[ 4490], 00:12:30.455 | 99.00th=[ 5145], 99.50th=[ 6521], 99.90th=[ 8586], 99.95th=[ 8717], 00:12:30.455 | 99.99th=[ 8848] 00:12:30.455 bw ( KiB/s): min=64520, max=72280, per=100.00%, avg=67880.00, stdev=3983.16, samples=3 00:12:30.455 iops : min=16130, max=18070, avg=16970.00, stdev=995.79, samples=3 00:12:30.455 write: IOPS=17.0k, BW=66.2MiB/s (69.5MB/s)(133MiB/2001msec); 0 zone resets 00:12:30.455 slat (nsec): min=4520, max=85604, avg=6344.20, stdev=1943.93 00:12:30.455 clat (usec): min=341, max=9265, avg=3771.04, stdev=485.02 00:12:30.455 lat (usec): min=347, max=9271, avg=3777.38, stdev=485.68 00:12:30.455 clat percentiles (usec): 00:12:30.455 | 1.00th=[ 2966], 5.00th=[ 3326], 10.00th=[ 3425], 20.00th=[ 3490], 00:12:30.455 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3720], 00:12:30.455 | 70.00th=[ 3785], 80.00th=[ 4146], 90.00th=[ 4359], 95.00th=[ 4490], 00:12:30.455 | 99.00th=[ 5145], 99.50th=[ 6194], 99.90th=[ 8586], 99.95th=[ 8717], 00:12:30.455 | 99.99th=[ 8848] 00:12:30.455 bw ( KiB/s): min=64976, max=71808, per=99.85%, avg=67728.00, stdev=3604.41, samples=3 00:12:30.455 iops : min=16244, max=17952, avg=16932.00, stdev=901.10, samples=3 00:12:30.455 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:12:30.455 lat (msec) : 2=0.05%, 4=77.42%, 10=22.48% 00:12:30.455 cpu : usr=99.05%, sys=0.00%, ctx=2, majf=0, minf=606 00:12:30.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:30.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:30.455 issued rwts: total=33845,33931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:30.455 00:12:30.455 Run status group 0 (all jobs): 00:12:30.455 READ: bw=66.1MiB/s (69.3MB/s), 66.1MiB/s-66.1MiB/s (69.3MB/s-69.3MB/s), io=132MiB (139MB), run=2001-2001msec 00:12:30.455 WRITE: bw=66.2MiB/s (69.5MB/s), 66.2MiB/s-66.2MiB/s (69.5MB/s-69.5MB/s), io=133MiB (139MB), run=2001-2001msec 00:12:30.455 ----------------------------------------------------- 00:12:30.455 Suppressions used: 00:12:30.455 count bytes template 00:12:30.455 1 32 /usr/src/fio/parse.c 00:12:30.455 1 8 libtcmalloc_minimal.so 00:12:30.455 ----------------------------------------------------- 00:12:30.455 00:12:30.455 18:04:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:30.455 18:04:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:30.455 18:04:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:30.455 18:04:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:30.455 18:04:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:30.455 18:04:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:30.714 18:04:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:30.714 18:04:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:30.714 18:04:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:30.714 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:30.714 fio-3.35 00:12:30.714 Starting 1 thread 00:12:34.907 00:12:34.908 test: (groupid=0, jobs=1): err= 0: pid=70390: Wed May 15 18:04:27 2024 00:12:34.908 read: IOPS=16.3k, BW=63.5MiB/s (66.6MB/s)(127MiB/2001msec) 00:12:34.908 slat (nsec): min=4737, max=53507, avg=6345.85, stdev=1775.48 00:12:34.908 clat (usec): min=246, max=8741, avg=3911.77, stdev=535.84 00:12:34.908 lat (usec): min=252, max=8795, avg=3918.11, stdev=536.61 00:12:34.908 clat percentiles (usec): 00:12:34.908 | 1.00th=[ 3228], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3556], 00:12:34.908 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3818], 00:12:34.908 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:12:34.908 | 99.00th=[ 5735], 99.50th=[ 7242], 99.90th=[ 7767], 99.95th=[ 7898], 00:12:34.908 | 99.99th=[ 8455] 00:12:34.908 bw ( KiB/s): min=64272, max=66088, per=100.00%, avg=65274.67, stdev=922.69, samples=3 00:12:34.908 iops : min=16068, max=16522, avg=16318.67, stdev=230.67, samples=3 00:12:34.908 write: IOPS=16.3k, BW=63.6MiB/s (66.7MB/s)(127MiB/2001msec); 0 zone resets 00:12:34.908 slat (nsec): min=4723, max=38607, avg=6516.28, stdev=1798.27 00:12:34.908 clat (usec): min=278, max=8567, avg=3923.18, stdev=532.52 00:12:34.908 lat (usec): min=284, max=8579, avg=3929.69, stdev=533.26 00:12:34.908 clat percentiles (usec): 00:12:34.908 | 1.00th=[ 3261], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3556], 00:12:34.908 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3851], 00:12:34.908 | 70.00th=[ 4228], 80.00th=[ 
4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:12:34.908 | 99.00th=[ 5800], 99.50th=[ 7177], 99.90th=[ 7767], 99.95th=[ 7832], 00:12:34.908 | 99.99th=[ 8356] 00:12:34.908 bw ( KiB/s): min=64696, max=65688, per=99.83%, avg=65045.33, stdev=557.27, samples=3 00:12:34.908 iops : min=16174, max=16422, avg=16261.33, stdev=139.32, samples=3 00:12:34.908 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:12:34.908 lat (msec) : 2=0.06%, 4=65.09%, 10=34.81% 00:12:34.908 cpu : usr=99.05%, sys=0.00%, ctx=2, majf=0, minf=604 00:12:34.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:34.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.908 issued rwts: total=32522,32594,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:34.908 00:12:34.908 Run status group 0 (all jobs): 00:12:34.908 READ: bw=63.5MiB/s (66.6MB/s), 63.5MiB/s-63.5MiB/s (66.6MB/s-66.6MB/s), io=127MiB (133MB), run=2001-2001msec 00:12:34.908 WRITE: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=127MiB (134MB), run=2001-2001msec 00:12:35.167 ----------------------------------------------------- 00:12:35.167 Suppressions used: 00:12:35.167 count bytes template 00:12:35.167 1 32 /usr/src/fio/parse.c 00:12:35.167 1 8 libtcmalloc_minimal.so 00:12:35.167 ----------------------------------------------------- 00:12:35.167 00:12:35.167 18:04:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:35.167 18:04:27 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:12:35.167 00:12:35.167 real 0m18.154s 00:12:35.167 user 0m14.288s 00:12:35.167 sys 0m3.148s 00:12:35.167 18:04:27 nvme.nvme_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:35.167 18:04:27 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:12:35.167 ************************************ 00:12:35.167 END TEST nvme_fio 00:12:35.167 ************************************ 00:12:35.167 00:12:35.167 real 1m32.032s 00:12:35.167 user 3m44.160s 00:12:35.167 sys 0m16.402s 00:12:35.167 18:04:27 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:35.167 ************************************ 00:12:35.167 END TEST nvme 00:12:35.167 ************************************ 00:12:35.167 18:04:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:35.440 18:04:27 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:12:35.440 18:04:27 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:35.440 18:04:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:35.440 18:04:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:35.440 18:04:27 -- common/autotest_common.sh@10 -- # set +x 00:12:35.440 ************************************ 00:12:35.440 START TEST nvme_scc 00:12:35.440 ************************************ 00:12:35.440 18:04:27 nvme_scc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:35.440 * Looking for test storage... 
00:12:35.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:35.440 18:04:27 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:35.440 18:04:27 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:35.440 18:04:27 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:35.440 18:04:27 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:35.440 18:04:27 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:35.440 18:04:27 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.440 18:04:27 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.440 18:04:27 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.440 18:04:27 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.440 18:04:27 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.440 18:04:27 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.440 18:04:27 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:35.440 18:04:27 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.440 18:04:27 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:12:35.440 18:04:27 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:35.440 18:04:27 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:12:35.440 18:04:27 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:35.440 18:04:27 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:12:35.440 18:04:27 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:35.440 18:04:27 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:35.440 18:04:27 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:35.440 18:04:27 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:12:35.440 18:04:27 nvme_scc -- 
00:12:35.440 18:04:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname
00:12:35.440 18:04:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:12:35.440 18:04:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:12:35.440 18:04:27 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:12:35.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:35.958 Waiting for block devices as requested
00:12:36.217 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:12:36.217 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:12:36.217 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:12:36.217 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:12:41.490 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
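Each "uio_pci_generic -> nvme" line above is setup.sh handing a device back to the kernel nvme driver. Roughly, the standard sysfs sequence looks like the sketch below (illustrative only; the real setup.sh handles many more cases and drivers):

    bdf=0000:00:11.0                                        # one of the devices listed above
    echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind   # detach the current driver
    echo nvme > /sys/bus/pci/devices/$bdf/driver_override   # pin the next probe to nvme
    echo "$bdf" > /sys/bus/pci/drivers_probe                # ask the kernel to re-probe the device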
18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:41.490 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.490 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:41.491 18:04:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.491 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:41.492 18:04:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:41.492 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.493 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:41.494 18:04:33 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.494 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:41.495 18:04:33 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:41.495 18:04:33 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:41.495 18:04:33 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:41.495 18:04:33 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:41.495 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:41.496 18:04:33 nvme_scc -- 
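The ver=0x10400 stored above is the packed version layout from the NVMe spec (major in bits 31:16, minor in 15:08, tertiary in 07:00), so this QEMU controller reports NVMe 1.4.0:

#!/usr/bin/env bash
# Unpack the VER value captured above.
ver=0x10400
printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))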
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 
18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:41.496 18:04:33 nvme_scc -- 
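The wctemp=343 and cctemp=373 just stored look large only because the spec reports both thresholds in kelvins; converted, they are the usual ~70 C warning and ~100 C critical limits:

#!/usr/bin/env bash
# WCTEMP/CCTEMP are kelvins per the NVMe spec; convert the values above.
for t in 343 373; do
    printf '%d K is about %d C\n' "$t" $((t - 273))
done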
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.496 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:41.497 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:41.761 18:04:33 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:41.761 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r 
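Several nvme1 fields in this run are log2-encoded rather than literal. sqes=0x66 and cqes=0x44 carry minimum and maximum queue-entry sizes in their low and high nibbles, and mdts=7 is a power-of-two multiple of the controller's minimum page size (assumed to be 4 KiB below, since CAP.MPSMIN does not appear in this trace). oncs=0x15d is a bitmask that, going by the spec's bit assignments, advertises Compare, Dataset Management, Write Zeroes, the Save/Select feature fields, Timestamp, and Copy. A quick decode:

#!/usr/bin/env bash
# Decode the log2-encoded id-ctrl fields captured above.
sqes=0x66 cqes=0x44 mdts=7
mpsmin_bytes=4096   # assumption: CAP.MPSMIN = 4 KiB (not shown in the trace)
echo "SQ entry: $((1 << (sqes & 0xf)))-$((1 << (sqes >> 4))) bytes"
echo "CQ entry: $((1 << (cqes & 0xf)))-$((1 << (cqes >> 4))) bytes"
echo "Max transfer: $(((1 << mdts) * mpsmin_bytes / 1024)) KiB"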
reg val 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:41.762 18:04:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:41.762 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # 
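At this point the nvme1n1 geometry is complete enough for arithmetic: nsze/ncap/nuse are block counts (0x17a17a above), and flbas=0x7 selects LBA format 7, which the trace parses a little further down as ms:64 lbads:12, i.e. 4096-byte data blocks with 64 bytes of out-of-band metadata. Ignoring the metadata:

#!/usr/bin/env bash
# Capacity math from the id-ns values above (lbads:12 => 2^12-byte blocks).
nsze=0x17a17a block=4096
printf '%d blocks x %d bytes = %d bytes (~%d MiB)\n' \
    $((nsze)) $((block)) $((nsze * block)) $((nsze * block / 1024 / 1024))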
IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 
18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
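The mssrl/mcl/msrc triple just stored gives the Copy-command limits that pair with the Copy bit in oncs: most blocks per source range, most blocks per whole Copy command, and how many source ranges one command may carry (msrc is a 0-based value, so 127 means 128 ranges):

#!/usr/bin/env bash
# Copy limits from the id-ns output above; msrc is 0-based per the spec.
mssrl=128 mcl=128 msrc=127
echo "per-range limit : $mssrl blocks"
echo "per-copy limit  : $mcl blocks"
echo "source ranges   : $((msrc + 1))"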
00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:41.763 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:41.764 18:04:34 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:41.764 18:04:34 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:41.764 18:04:34 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:41.764 18:04:34 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:41.764 18:04:34 
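With the last LBA format read, functions.sh@58-63 files nvme1 away exactly as it did nvme0 earlier in the trace: the controller, the name of its per-controller namespace array, and its PCI address are all keyed by the device name, plus a numerically ordered list for stable iteration. A compact sketch of that bookkeeping:

#!/usr/bin/env bash
# Mirrors the assignments at functions.sh@58-63 above for nvme1.
declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()
declare -A nvme1_ns=([1]=nvme1n1)        # namespaces keyed by index

ctrl_dev=nvme1
ctrls["$ctrl_dev"]=nvme1
nvmes["$ctrl_dev"]=nvme1_ns              # name of the per-controller ns array
bdfs["$ctrl_dev"]=0000:00:10.0
ordered_ctrls[${ctrl_dev/nvme/}]=nvme1   # "nvme1" minus "nvme" -> index 1

echo "${ctrl_dev} at ${bdfs[$ctrl_dev]} with namespace ${nvme1_ns[1]}"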
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.764 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val
00:12:41.764-00:12:41.767 18:04:34 nvme_scc -- nvme/functions.sh@21-23 -- nvme_get populated nvme2[] from id-ctrl /dev/nvme2 (one IFS=: / read / [[ -n val ]] / eval cycle per register, condensed here to the resulting assignments):
    cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
    fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
    oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
    mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0
    mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0
    nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0
    fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
    subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
    ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-'
    active_power_workload=-
00:12:41.767 18:04:34 nvme_scc -- nvme/functions.sh@53 -- local -n _ctrl_ns=nvme2_ns
00:12:41.767 18:04:34 nvme_scc -- nvme/functions.sh@54-57 -- for ns in "$ctrl/${ctrl##*/}n"*: found /sys/class/nvme/nvme2/nvme2n1; ns_dev=nvme2n1; nvme_get nvme2n1 id-ns /dev/nvme2n1 (local -gA 'nvme2n1=()'; /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1)
00:12:41.767 18:04:34 nvme_scc -- nvme/functions.sh@21-23 -- nvme2n1[] begins: nsze=0x100000 ncap=0x100000
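The trace above is the inner loop of nvme_get in nvme/functions.sh: line @16 runs the nvme-cli binary, @21 reads each output line with IFS=: into a reg/val pair, @22 skips blank values, and @23 evals the pair into a global associative array (nvme2, nvme2n1, ...). A minimal standalone sketch of that pattern follows; parse_id_output is an illustrative stand-in name rather than SPDK's actual helper, and the key trimming assumes "reg : value" lines:

    # Sketch of the nvme_get pattern visible in the trace (assumed layout).
    parse_id_output() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # functions.sh@20: fresh global assoc array
        while IFS=: read -r reg val; do     # functions.sh@21: split on the first ':'
            [[ -n $val ]] || continue       # functions.sh@22: keep only real values
            reg=${reg//[[:space:]]/}        # trim key padding (assumption)
            eval "${ref}[\$reg]=\$val"      # functions.sh@23: e.g. nvme2[ver]=0x10400
        done < <("$@")                      # functions.sh@16: run the nvme-cli command
    }
    # e.g.: parse_id_output nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2

The eval is what lets one function fill arrays of any name passed in $ref, which is why every register in the log appears as an eval 'nvmeX[reg]="val"' record.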
00:12:41.767-00:12:41.768 18:04:34 nvme_scc -- nvme/functions.sh@21-23 -- nvme2n1[] continued:
    nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
    nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
    mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:12:41.768 18:04:34 nvme_scc -- nvme/functions.sh@58 -- _ctrl_ns[1]=nvme2n1
00:12:41.768-00:12:41.769 18:04:34 nvme_scc -- nvme/functions.sh@54-57 -- found /sys/class/nvme/nvme2/nvme2n2; ns_dev=nvme2n2; nvme_get nvme2n2 id-ns /dev/nvme2n2 (local -gA 'nvme2n2=()'; /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2)
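Between nvme2n1 and nvme2n2 the trace passes through functions.sh@54-58: a glob over /sys/class/nvme/nvme2/nvme2n* picks up each namespace node, nvme_get fills the matching nvme2n<N>[] array, and @58 records it in the controller's namespace map through the _ctrl_ns nameref. A sketch of that walk, reusing the hypothetical parse_id_output from above:

    # Sketch of the namespace walk traced at functions.sh@53-58 (standalone
    # form; SPDK's real loop runs inside its controller scan).
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns               # functions.sh@53: nameref to nvme2_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do        # functions.sh@54: nvme2n1 nvme2n2 ...
        [[ -e $ns ]] || continue               # functions.sh@55: skip unmatched glob
        ns_dev=${ns##*/}                       # functions.sh@56: e.g. nvme2n2
        parse_id_output "$ns_dev" /usr/local/src/nvme-cli/nvme id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev            # functions.sh@58: index by NSID suffix
    done

Indexing by ${ns##*n} (everything after the last "n") is why the log shows _ctrl_ns[1]=nvme2n1, _ctrl_ns[2]=nvme2n2, and so on.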
00:12:41.769-00:12:41.770 18:04:34 nvme_scc -- nvme/functions.sh@21-23 -- nvme2n2[] matches nvme2n1[] field for field:
    nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
    rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0
    npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:12:41.770 18:04:34 nvme_scc -- nvme/functions.sh@58 -- _ctrl_ns[2]=nvme2n2
00:12:41.770 18:04:34 nvme_scc -- nvme/functions.sh@54-57 -- found /sys/class/nvme/nvme2/nvme2n3; ns_dev=nvme2n3; nvme_get nvme2n3 id-ns /dev/nvme2n3 (local -gA 'nvme2n3=()'; /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3)
00:12:41.770-00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21-23 -- nvme2n3[] so far:
    nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
    rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0
00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- #
eval 'nvme2n3[nabo]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:41.771 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
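The lbaf0..lbaf7 entries recorded above pair with the flbas field captured earlier in the same id-ns parse (flbas=0x4): the low bits of FLBAS select the in-use LBA format, and lbads is the log2 of the logical block size, which is why lbaf4 with lbads:12 carries the "(in use)" marker and the namespaces report 4096-byte blocks. A minimal decode in bash, using the values from the trace:

    flbas=0x4 lbads=12
    echo "in-use LBA format: $(( flbas & 0xf ))"   # -> 4, matching lbaf4 "(in use)"
    echo "logical block size: $(( 1 << lbads ))"   # -> 4096 bytes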
00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:41.772 18:04:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:42.033 18:04:34 nvme_scc -- scripts/common.sh@15 -- # local i 00:12:42.033 18:04:34 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:42.033 18:04:34 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:42.033 18:04:34 nvme_scc -- scripts/common.sh@24 -- # return 0 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.033 18:04:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:42.033 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:42.034 18:04:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:42.034 18:04:34 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.034 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:42.035 18:04:34 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:42.035 18:04:34 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.035 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:42.036 
18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
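The long eval sequence above is the nvme_get helper walking nvme-cli's id-ctrl output and caching every reported field in a per-controller associative array (nvme3 here). A simplified sketch of that pattern, assuming a bash 4.2+ shell; the fake producer stands in for the real nvme id-ctrl call and is illustrative only:

    nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                # declares e.g. nvme3=() globally
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # trim padding around the field name
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\${val# }"   # nvme3[vid]=0x1b36, nvme3[oncs]=0x15d, ...
      done < <("$@")                     # run the producer command
    }

    fake_id_ctrl() { printf 'vid : 0x1b36\nssvid : 0x1af4\noncs : 0x15d\n'; }
    nvme_get nvme3 fake_id_ctrl
    echo "${nvme3[oncs]}"                # -> 0x15d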
00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:42.036 18:04:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:42.036 18:04:34 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:12:42.037 18:04:34 nvme_scc -- 
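Each ctrl_has_scc call in this loop reduces to one bitmask test: ONCS (Optional NVM Command Support) bit 8 advertises the Copy command, which is what Simple Copy requires; the same test repeats for nvme3 and nvme2 just below. A standalone sketch with the oncs value seen in the trace (the helper name is illustrative):

    ctrl_supports_copy() {
      local oncs=$1
      (( oncs & 1 << 8 ))              # ONCS bit 8 = Copy command supported
    }
    ctrl_supports_copy 0x15d && echo "SCC supported"   # 0x15d has bit 8 set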
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:12:42.037 18:04:34 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:12:42.037 18:04:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:12:42.037 18:04:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:12:42.037 18:04:34 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:42.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:43.189 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:43.189 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:43.189 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:43.189 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:43.189 18:04:35 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:43.189 18:04:35 nvme_scc -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:43.189 18:04:35 nvme_scc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:43.189 18:04:35 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:43.189 ************************************ 00:12:43.189 START TEST nvme_simple_copy 00:12:43.189 ************************************ 00:12:43.189 18:04:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:43.780 Initializing NVMe Controllers 00:12:43.780 Attaching to 0000:00:10.0 00:12:43.780 Controller supports SCC. Attached to 0000:00:10.0 00:12:43.780 Namespace ID: 1 size: 6GB 00:12:43.780 Initialization complete. 00:12:43.780 00:12:43.780 Controller QEMU NVMe Ctrl (12340 ) 00:12:43.780 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:12:43.780 Namespace Block Size:4096 00:12:43.780 Writing LBAs 0 to 63 with Random Data 00:12:43.780 Copied LBAs from 0 - 63 to the Destination LBA 256 00:12:43.780 LBAs matching Written Data: 64 00:12:43.780 00:12:43.780 real 0m0.316s 00:12:43.780 user 0m0.119s 00:12:43.780 sys 0m0.095s 00:12:43.780 18:04:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:43.780 18:04:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:12:43.780 ************************************ 00:12:43.780 END TEST nvme_simple_copy 00:12:43.780 ************************************ 00:12:43.780 00:12:43.780 real 0m8.325s 00:12:43.780 user 0m1.394s 00:12:43.780 sys 0m1.808s 00:12:43.780 18:04:36 nvme_scc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:43.780 18:04:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:43.780 ************************************ 00:12:43.780 END TEST nvme_scc 00:12:43.780 ************************************ 00:12:43.780 18:04:36 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:12:43.780 18:04:36 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:12:43.780 18:04:36 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:12:43.780 18:04:36 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:12:43.780 18:04:36 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:12:43.780 18:04:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:43.780 18:04:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:43.780 18:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:43.780 ************************************ 00:12:43.780 START TEST nvme_fdp 00:12:43.780 ************************************ 00:12:43.780 18:04:36 nvme_fdp -- common/autotest_common.sh@1121 -- # test/nvme/nvme_fdp.sh 00:12:43.780 * Looking for test storage... 
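The nvme_scc run above picks its target controller by probing each device's ONCS field: bit 8 of ONCS advertises the NVMe Copy (Simple Copy) command, and since every controller here reports oncs=0x15d (bit 8 set), the first one enumerated, nvme1 at 0000:00:10.0, is used. A minimal sketch of that probe, assuming plain nvme-cli on PATH (the trace invokes /usr/local/src/nvme-cli/nvme) and a hypothetical helper name; the real check is ctrl_has_scc in nvme/functions.sh:

ctrl_supports_scc() {
  local dev=$1 oncs
  # id-ctrl prints a line such as "oncs : 0x15d"; pull the hex value out.
  oncs=$(nvme id-ctrl "$dev" | awk '$1 == "oncs" {print $3}')
  # Bit 8 of ONCS is the Copy (SCC) capability, the same
  # (( oncs & 1 << 8 )) test the trace performs for each controller.
  (( oncs & 1 << 8 ))
}

# e.g. ctrl_supports_scc /dev/nvme1 && echo "nvme1 supports simple copy"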
00:12:43.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:43.780 18:04:36 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:43.780 18:04:36 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:43.780 18:04:36 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:43.780 18:04:36 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:43.780 18:04:36 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:43.780 18:04:36 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.780 18:04:36 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.780 18:04:36 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.780 18:04:36 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.781 18:04:36 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.781 18:04:36 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.781 18:04:36 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:12:43.781 18:04:36 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.781 18:04:36 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:12:43.781 18:04:36 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:43.781 18:04:36 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:12:43.781 18:04:36 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:43.781 18:04:36 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:12:43.781 18:04:36 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:43.781 18:04:36 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:43.781 18:04:36 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:43.781 18:04:36 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:12:43.781 18:04:36 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:43.781 18:04:36 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:44.057 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:44.327 Waiting for block devices as requested 00:12:44.327 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:44.592 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:44.592 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:44.592 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:49.873 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:49.873 18:04:42 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:49.873 18:04:42 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:49.873 18:04:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:49.873 18:04:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:49.873 18:04:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:49.873 18:04:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:49.873 18:04:42 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:49.873 18:04:42 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:49.873 18:04:42 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:49.873 18:04:42 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:49.873 18:04:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:49.873 18:04:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:49.873 18:04:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:49.873 18:04:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:49.873 18:04:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 
18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:49.874 18:04:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:49.874 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:49.875 18:04:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.875 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:49.876 18:04:42 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.876 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:49.877 
18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:49.877 
18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.877 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:49.878 18:04:42 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
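Each nvme_get call in this scan walks the "field : value" lines that nvme-cli emits (id-ctrl for nvme0, id-ns for nvme0n1) and stores them in a global associative array named after the device, which is what produces the long runs of nvme0[...]= and nvme0n1[...]= assignments in the trace. A simplified sketch of that pattern under stated assumptions: plain nvme on PATH, a hypothetical function name, and a nameref in place of the eval the real nvme_get helper uses:

fill_ns_array() {
  local ref=$1 reg val
  declare -gA "$ref"            # global associative array, e.g. nvme0n1
  local -n arr=$ref             # nameref so assignments land in that array
  while IFS=: read -r reg val; do
    # Skip headers and blank lines, as the trace's [[ -n '' ]] checks do.
    [[ -n $reg && -n $val ]] || continue
    reg=${reg//[[:space:]]/}    # "nsze   " -> "nsze", "lbaf  5" -> "lbaf5"
    arr[$reg]=${val# }          # nvme0n1[nsze]="0x140000", etc.
  done < <(nvme id-ns "/dev/$ref")
}

# Lookups then work the way get_nvme_ctrl_feature does in the trace:
#   declare -n _ns=nvme0n1; echo "${_ns[flbas]}"    # -> 0x4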
00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:49.878 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:49.879 18:04:42 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:49.879 18:04:42 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:49.879 18:04:42 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:49.879 18:04:42 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:49.879 18:04:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 
18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:49.879 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
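Before any of this per-controller parsing starts, the enumeration loop (functions.sh@47-51 in the trace) resolves each /sys/class/nvme/nvmeN to its PCI address and gates it through scripts/common.sh's pci_can_use; in this run both list checks compare against empty strings, so every device passes. A hedged sketch of that gate, with PCI_ALLOWED/PCI_BLOCKED as an assumption about the variable names involved:

# Sketch of the pci_can_use gate traced at scripts/common.sh@15-24 above.
pci_can_use_sketch() {
  local bdf=$1
  [[ " ${PCI_BLOCKED-} " == *" $bdf "* ]] && return 1  # explicitly blocked
  [[ -z ${PCI_ALLOWED-} ]] && return 0                 # no allow list: accept all
  [[ " $PCI_ALLOWED " == *" $bdf "* ]]                 # must be allow-listed
}

pci_can_use_sketch 0000:00:10.0 && echo "using 0000:00:10.0"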
00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:49.880 18:04:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
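The wctemp=343 and cctemp=373 captured just above are the warning and critical composite-temperature thresholds, which NVMe reports in Kelvin; converting them for display is a one-liner:

k_to_c() { echo $(( $1 - 273 )); }    # NVMe temperature fields are Kelvin
echo "warning:  $(k_to_c 343) C"      # -> 70 C
echo "critical: $(k_to_c 373) C"      # -> 100 C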
00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.880 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:49.881 18:04:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
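The oncs=0x15d parsed above is the Optional NVM Command Support bitmask. Decoding it with the base-spec bit assignments (my reading of the spec, not something the trace itself asserts) shows which optional commands this QEMU controller advertises:

oncs=0x15d
# ONCS bit labels per the NVMe base spec, bits 0..8; treat as best-effort.
names=(compare write_uncor dsm write_zeroes save_features reservations
       timestamp verify copy)
for bit in "${!names[@]}"; do
  (( oncs & (1 << bit) )) && echo "supports ${names[$bit]}"
done
# -> compare, dsm, write_zeroes, save_features, timestamp, copy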
00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:49.881 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:49.882 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
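For nvme1n1 the id-ns parse above captured nlbaf=7 and flbas=0x7: the low four bits of flbas select the active LBA format, so this namespace runs format 7, which the lbaf entries further down show as ms:64 lbads:12 (in use). lbads is log2 of the data block size:

flbas=0x7
fmt=$(( flbas & 0xf ))    # active LBA format index -> 7
lbads=12                  # from the matching lbaf7 entry below
echo "nvme1n1: format $fmt, $(( 1 << lbads ))-byte blocks, 64-byte metadata"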
00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 
18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.883 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
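Once the remaining lbaf descriptors are echoed below, functions.sh@58-63 registers the namespace and controller into the global discovery maps: _ctrl_ns, ctrls, nvmes, bdfs and ordered_ctrls. A hedged sketch of how a caller could walk those maps after discovery finishes (the array names come from the trace; the nameref usage is mine):

# Walk the discovery maps populated at functions.sh@58-63 in the trace.
for ctrl in "${ordered_ctrls[@]}"; do
  [[ -n $ctrl ]] || continue            # ordered_ctrls can be sparse
  echo "$ctrl at ${bdfs[$ctrl]}"        # e.g. nvme1 at 0000:00:10.0
  declare -n ns_map=${nvmes[$ctrl]}     # nameref to e.g. nvme1_ns
  echo "  namespaces: ${ns_map[*]}"     # e.g. nvme1n1
  unset -n ns_map
done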
00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:49.884 18:04:42 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:49.884 18:04:42 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:49.884 18:04:42 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:49.884 18:04:42 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:49.884 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:49.885 18:04:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.885 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:49.886 18:04:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.886 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:49.887 18:04:42 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.887 
18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:49.887 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:49.888 18:04:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.888 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:49.889 18:04:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:49.889 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:49.890 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
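The repetitive IFS=: / read / eval triplets in this trace are all one helper at work: each "reg : val" line that nvme-cli prints for id-ctrl and id-ns is split on the first colon and eval'ed into a global bash associative array named after the device node (nvme2, nvme2n1, nvme2n2, ...). A minimal sketch of that idiom, simplified from the nvme/functions.sh nvme_get helper being traced -- the whitespace trimming below is an approximation, not the exact SPDK code:

    # Sketch only: simplified from the nvme_get helper traced above.
    # nvme-cli prints one "reg : val" pair per line, e.g.:
    #   vid       : 0x1b36
    #   lbaf  0   : ms:0   lbads:9  rp:0
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"               # global array, e.g. nvme2=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue     # banner/blank lines carry no value
            reg=${reg//[[:space:]]/}      # "lbaf  0   " -> "lbaf0"
            val=${val# }                  # drop the space after the colon
            eval "${ref}[\$reg]=\$val"    # nvme2[vid]=0x1b36, ...
        done < <(nvme "$cmd" "$dev")
    }
    # e.g.: nvme_get nvme2 id-ctrl /dev/nvme2   (needs nvme-cli and a device)

Escaping $reg and $val inside the eval string keeps values that contain spaces or parentheses (such as "ms:64 lbads:12 rp:0 (in use)") from being re-parsed as shell syntax, which is why the trace shows each value assigned verbatim.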
00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.175 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
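Once populated, later stages of the suite read controller features straight out of these arrays instead of re-running nvme-cli; the trace's "local -n _ctrl_ns=..." lines are namerefs into them. An illustrative lookup, with field values copied from the nvme2 output captured above -- the DSM test shows the capability-bit gating pattern, not SPDK's exact check:

    # Illustrative only: stand-in values copied from the nvme2 trace above.
    declare -A nvme2=( [mn]='QEMU NVMe Ctrl ' [mdts]=7 [oncs]=0x15d [nn]=256 )
    declare -n ctrl=nvme2                  # nameref, as in the traced helpers
    echo "model=${ctrl[mn]} mdts=${ctrl[mdts]} nn=${ctrl[nn]}"
    # ONCS bit 2 (0x4) is Dataset Management per the NVMe spec:
    if (( ${ctrl[oncs]} & 0x4 )); then
        echo "nvme2 supports DSM"          # 0x15d & 0x4 != 0 -> true
    fi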
00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.176 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:50.177 18:04:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:50.177 18:04:42 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:50.177 18:04:42 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:50.177 18:04:42 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:50.177 18:04:42 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:50.177 18:04:42 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.177 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
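One field worth decoding from the id-ctrl dump above: mdts=7 caps any single transfer at 2^7 units of the controller's minimum memory page size (CAP.MPSMIN, which this trace does not print). Assuming the common 4 KiB minimum page, which is an assumption here:

echo "$(( (1 << 7) * 4096 / 1024 )) KiB"   # -> 512 KiB max transfer, assuming 4 KiB MPSMIN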
00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
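Likewise, the temperature thresholds captured above (wctemp=343, cctemp=373) are reported in Kelvin per the NVMe spec:

echo "warning: $((343 - 273))C, critical: $((373 - 273))C"   # -> 70C / 100C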
00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.178 18:04:42 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:50.178 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 
18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:50.179 18:04:42 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:12:50.179 18:04:42 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:12:50.179 18:04:42 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:12:50.179 18:04:42 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:50.179 18:04:42 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:50.179 18:04:42 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:50.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:51.317 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:51.317 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:51.317 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:51.317 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:51.317 18:04:43 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:51.317 18:04:43 nvme_fdp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:51.317 18:04:43 nvme_fdp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:51.318 18:04:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:51.318 ************************************ 00:12:51.318 START TEST nvme_flexible_data_placement 00:12:51.318 ************************************ 00:12:51.318 18:04:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:51.576 Initializing NVMe Controllers 00:12:51.576 Attaching to 0000:00:13.0 00:12:51.576 Controller supports FDP Attached to 0000:00:13.0 00:12:51.576 Namespace ID: 1 Endurance Group ID: 1 
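The controller selection just traced comes down to a single bit test: ctrl_has_fdp reads each controller's CTRATT value out of its Identify Controller data and checks bit 19 (Flexible Data Placement supported), which is why nvme3's 0x88010 qualifies while the 0x8000 controllers do not. The same check can be run by hand against a kernel-bound device; a minimal sketch, assuming nvme-cli with JSON output and jq are available:

  # Hedged sketch: read CTRATT from Identify Controller and test bit 19 (FDP).
  ctratt=$(nvme id-ctrl /dev/nvme3 -o json | jq -r '.ctratt')
  if (( ctratt & (1 << 19) )); then
    echo "controller supports Flexible Data Placement"
  fi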
00:12:51.576 Initialization complete. 00:12:51.576 00:12:51.576 ================================== 00:12:51.576 == FDP tests for Namespace: #01 == 00:12:51.576 ================================== 00:12:51.576 00:12:51.576 Get Feature: FDP: 00:12:51.576 ================= 00:12:51.576 Enabled: Yes 00:12:51.576 FDP configuration Index: 0 00:12:51.576 00:12:51.576 FDP configurations log page 00:12:51.576 =========================== 00:12:51.576 Number of FDP configurations: 1 00:12:51.576 Version: 0 00:12:51.576 Size: 112 00:12:51.576 FDP Configuration Descriptor: 0 00:12:51.576 Descriptor Size: 96 00:12:51.576 Reclaim Group Identifier format: 2 00:12:51.576 FDP Volatile Write Cache: Not Present 00:12:51.576 FDP Configuration: Valid 00:12:51.576 Vendor Specific Size: 0 00:12:51.576 Number of Reclaim Groups: 2 00:12:51.576 Number of Reclaim Unit Handles: 8 00:12:51.576 Max Placement Identifiers: 128 00:12:51.576 Number of Namespaces Supported: 256 00:12:51.576 Reclaim Unit Nominal Size: 6000000 bytes 00:12:51.576 Estimated Reclaim Unit Time Limit: Not Reported 00:12:51.576 RUH Desc #000: RUH Type: Initially Isolated 00:12:51.576 RUH Desc #001: RUH Type: Initially Isolated 00:12:51.576 RUH Desc #002: RUH Type: Initially Isolated 00:12:51.576 RUH Desc #003: RUH Type: Initially Isolated 00:12:51.576 RUH Desc #004: RUH Type: Initially Isolated 00:12:51.576 RUH Desc #005: RUH Type: Initially Isolated 00:12:51.576 RUH Desc #006: RUH Type: Initially Isolated 00:12:51.576 RUH Desc #007: RUH Type: Initially Isolated 00:12:51.576 00:12:51.576 FDP reclaim unit handle usage log page 00:12:51.576 ====================================== 00:12:51.576 Number of Reclaim Unit Handles: 8 00:12:51.576 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:51.576 RUH Usage Desc #001: RUH Attributes: Unused 00:12:51.576 RUH Usage Desc #002: RUH Attributes: Unused 00:12:51.576 RUH Usage Desc #003: RUH Attributes: Unused 00:12:51.576 RUH Usage Desc #004: RUH Attributes: Unused 00:12:51.576 RUH Usage Desc #005: RUH Attributes: Unused 00:12:51.576 RUH Usage Desc #006: RUH Attributes: Unused 00:12:51.576 RUH Usage Desc #007: RUH Attributes: Unused 00:12:51.576 00:12:51.576 FDP statistics log page 00:12:51.576 ======================= 00:12:51.576 Host bytes with metadata written: 802672640 00:12:51.576 Media bytes with metadata written: 802820096 00:12:51.576 Media bytes erased: 0 00:12:51.576 00:12:51.576 FDP Reclaim unit handle status 00:12:51.576 ============================== 00:12:51.576 Number of RUHS descriptors: 2 00:12:51.576 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000283 00:12:51.576 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:51.576 00:12:51.576 FDP write on placement id: 0 success 00:12:51.576 00:12:51.576 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:51.576 00:12:51.576 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:51.576 00:12:51.576 Get Feature: FDP Events for Placement handle: #0 00:12:51.576 ======================== 00:12:51.576 Number of FDP Events: 6 00:12:51.576 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:51.576 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:51.576 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:12:51.576 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:51.576 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:51.576 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
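Each block in this dump corresponds to one of the FDP log pages defined by TP4146, all scoped to an endurance group: FDP Configurations (log ID 0x20), Reclaim Unit Handle Usage (0x21), FDP Statistics (0x22, the source of the host/media byte counters), and FDP Events (0x23). They can be pulled raw with stock nvme-cli; a hedged sketch, assuming the log-specific identifier (--lsi) carries the endurance group ID, which is 1 in this run:

  # Hedged sketch: dump the raw FDP log pages for endurance group 1.
  nvme get-log /dev/nvme3 --log-id=0x20 --lsi=1 --log-len=512   # FDP configurations
  nvme get-log /dev/nvme3 --log-id=0x21 --lsi=1 --log-len=512   # reclaim unit handle usage
  nvme get-log /dev/nvme3 --log-id=0x22 --lsi=1 --log-len=64    # FDP statistics
  nvme get-log /dev/nvme3 --log-id=0x23 --lsi=1 --log-len=512   # FDP events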
00:12:51.576 00:12:51.576 FDP events log page 00:12:51.576 =================== 00:12:51.576 Number of FDP events: 1 00:12:51.576 FDP Event #0: 00:12:51.576 Event Type: RU Not Written to Capacity 00:12:51.576 Placement Identifier: Valid 00:12:51.576 NSID: Valid 00:12:51.576 Location: Valid 00:12:51.576 Placement Identifier: 0 00:12:51.576 Event Timestamp: 9 00:12:51.576 Namespace Identifier: 1 00:12:51.576 Reclaim Group Identifier: 0 00:12:51.576 Reclaim Unit Handle Identifier: 0 00:12:51.576 00:12:51.576 FDP test passed 00:12:51.835 00:12:51.835 real 0m0.293s 00:12:51.835 user 0m0.106s 00:12:51.835 sys 0m0.084s 00:12:51.835 18:04:44 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:51.835 18:04:44 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:12:51.835 ************************************ 00:12:51.835 END TEST nvme_flexible_data_placement 00:12:51.835 ************************************ 00:12:51.835 ************************************ 00:12:51.835 END TEST nvme_fdp 00:12:51.835 ************************************ 00:12:51.835 00:12:51.835 real 0m8.045s 00:12:51.835 user 0m1.290s 00:12:51.835 sys 0m1.737s 00:12:51.835 18:04:44 nvme_fdp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:51.835 18:04:44 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:51.835 18:04:44 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:12:51.835 18:04:44 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:51.835 18:04:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:51.835 18:04:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:51.835 18:04:44 -- common/autotest_common.sh@10 -- # set +x 00:12:51.835 ************************************ 00:12:51.835 START TEST nvme_rpc 00:12:51.835 ************************************ 00:12:51.835 18:04:44 nvme_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:51.835 * Looking for test storage... 
00:12:51.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:51.835 18:04:44 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:51.835 18:04:44 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:51.835 18:04:44 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:12:51.835 18:04:44 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:12:51.835 18:04:44 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:12:51.835 18:04:44 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:12:51.835 18:04:44 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:51.835 18:04:44 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:12:51.835 18:04:44 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:51.835 18:04:44 nvme_rpc -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:51.835 18:04:44 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:12:52.094 18:04:44 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:12:52.094 18:04:44 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:52.094 18:04:44 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:12:52.094 18:04:44 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:52.094 18:04:44 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=71728 00:12:52.094 18:04:44 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:52.094 18:04:44 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:52.094 18:04:44 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 71728 00:12:52.094 18:04:44 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 71728 ']' 00:12:52.094 18:04:44 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.094 18:04:44 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:52.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.094 18:04:44 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.094 18:04:44 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:52.094 18:04:44 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.094 [2024-05-15 18:04:44.461988] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
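get_first_nvme_bdf above is gen_nvme.sh piped through jq: the generator emits one bdev_nvme_attach_controller config entry per controller, jq lifts out each transport address, and the first one is kept. Condensed, with $SPDK standing in for the repo checkout path:

  # Hedged sketch of the bdf selection the trace performs above.
  bdfs=($("$SPDK"/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  bdf=${bdfs[0]}    # 0000:00:10.0 in this run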
00:12:52.094 [2024-05-15 18:04:44.462174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71728 ] 00:12:52.353 [2024-05-15 18:04:44.636976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:52.612 [2024-05-15 18:04:44.917824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.612 [2024-05-15 18:04:44.917840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.179 18:04:45 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:53.180 18:04:45 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:53.180 18:04:45 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:53.746 Nvme0n1 00:12:53.746 18:04:46 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:53.746 18:04:46 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:54.005 request: 00:12:54.005 { 00:12:54.005 "filename": "non_existing_file", 00:12:54.005 "bdev_name": "Nvme0n1", 00:12:54.005 "method": "bdev_nvme_apply_firmware", 00:12:54.005 "req_id": 1 00:12:54.005 } 00:12:54.005 Got JSON-RPC error response 00:12:54.005 response: 00:12:54.005 { 00:12:54.005 "code": -32603, 00:12:54.005 "message": "open file failed." 00:12:54.005 } 00:12:54.005 18:04:46 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:54.005 18:04:46 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:54.005 18:04:46 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:54.324 18:04:46 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:54.324 18:04:46 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 71728 00:12:54.324 18:04:46 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 71728 ']' 00:12:54.324 18:04:46 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 71728 00:12:54.324 18:04:46 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:12:54.324 18:04:46 nvme_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:54.324 18:04:46 nvme_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71728 00:12:54.324 18:04:46 nvme_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:54.324 killing process with pid 71728 00:12:54.324 18:04:46 nvme_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:54.324 18:04:46 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71728' 00:12:54.324 18:04:46 nvme_rpc -- common/autotest_common.sh@965 -- # kill 71728 00:12:54.324 18:04:46 nvme_rpc -- common/autotest_common.sh@970 -- # wait 71728 00:12:56.230 00:12:56.230 real 0m4.413s 00:12:56.230 user 0m8.173s 00:12:56.230 sys 0m0.738s 00:12:56.230 18:04:48 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:56.230 18:04:48 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.230 ************************************ 00:12:56.230 END TEST nvme_rpc 00:12:56.230 ************************************ 00:12:56.230 18:04:48 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:56.230 18:04:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:56.230 
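The pass condition in the nvme_rpc test above is a deliberate failure: bdev_nvme_apply_firmware is pointed at a file that does not exist, the target answers with JSON-RPC error -32603 ("open file failed."), rpc.py exits nonzero, and the script records rv=1 as the expected result. Replayed by hand against a running target; a sketch, with $SPDK as a placeholder checkout path:

  # Hedged sketch: replay the negative path and check rpc.py's exit status.
  "$SPDK"/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  if ! "$SPDK"/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo "firmware apply failed as expected"
  fi
  "$SPDK"/scripts/rpc.py bdev_nvme_detach_controller Nvme0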
18:04:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:56.230 18:04:48 -- common/autotest_common.sh@10 -- # set +x 00:12:56.230 ************************************ 00:12:56.230 START TEST nvme_rpc_timeouts 00:12:56.230 ************************************ 00:12:56.230 18:04:48 nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:56.230 * Looking for test storage... 00:12:56.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:56.230 18:04:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:56.230 18:04:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_71799 00:12:56.230 18:04:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_71799 00:12:56.230 18:04:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=71827 00:12:56.230 18:04:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:56.230 18:04:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 71827 00:12:56.230 18:04:48 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 71827 ']' 00:12:56.230 18:04:48 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.230 18:04:48 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:56.230 18:04:48 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.230 18:04:48 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:56.230 18:04:48 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:56.230 18:04:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:56.489 [2024-05-15 18:04:48.858357] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
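waitforlisten above blocks until the freshly launched spdk_tgt answers on its JSON-RPC unix socket (default /var/tmp/spdk.sock), retrying up to max_retries times. An equivalent poll, sketched with rpc.py's rpc_get_methods call as the probe and $SPDK as a placeholder path:

  # Hedged sketch: poll until the target answers on its RPC socket.
  for ((i = 0; i < 100; i++)); do
    "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
  done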
00:12:56.489 [2024-05-15 18:04:48.858556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71827 ] 00:12:56.749 [2024-05-15 18:04:49.030710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:57.008 [2024-05-15 18:04:49.252383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.008 [2024-05-15 18:04:49.252402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.574 18:04:50 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:57.574 Checking default timeout settings: 00:12:57.574 18:04:50 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:12:57.574 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:57.574 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:58.143 Making settings changes with rpc: 00:12:58.143 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:58.143 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:58.143 Check default vs. modified settings: 00:12:58.143 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:12:58.143 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_71799 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_71799 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:58.736 Setting action_on_timeout is changed as expected. 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_71799 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_71799 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:58.736 18:04:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:58.736 Setting timeout_us is changed as expected. 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_71799 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_71799 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:58.736 Setting timeout_admin_us is changed as expected. 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
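Each settings_to_check pass above runs the same grep | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g' pipeline over the default and modified save_config snapshots and compares the before/after values (none to abort, 0 to 12000000, 0 to 24000000). The same verification can be done structurally with jq; a sketch against a live target, $SPDK again a placeholder path:

  # Hedged sketch: confirm the modified nvme options straight out of save_config.
  "$SPDK"/scripts/rpc.py bdev_nvme_set_options \
      --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  "$SPDK"/scripts/rpc.py save_config | jq '
    .subsystems[] | select(.subsystem == "bdev")
    | .config[] | select(.method == "bdev_nvme_set_options")
    | .params | {action_on_timeout, timeout_us, timeout_admin_us}'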
00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_71799 /tmp/settings_modified_71799 00:12:58.736 18:04:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 71827 00:12:58.736 18:04:51 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 71827 ']' 00:12:58.736 18:04:51 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 71827 00:12:58.736 18:04:51 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:12:58.736 18:04:51 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:58.736 18:04:51 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71827 00:12:58.736 18:04:51 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:58.736 killing process with pid 71827 00:12:58.736 18:04:51 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:58.736 18:04:51 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71827' 00:12:58.736 18:04:51 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 71827 00:12:58.736 18:04:51 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 71827 00:13:01.269 RPC TIMEOUT SETTING TEST PASSED. 00:13:01.269 18:04:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:13:01.269 00:13:01.269 real 0m4.552s 00:13:01.269 user 0m8.581s 00:13:01.269 sys 0m0.752s 00:13:01.269 18:04:53 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:01.269 18:04:53 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:01.269 ************************************ 00:13:01.269 END TEST nvme_rpc_timeouts 00:13:01.269 ************************************ 00:13:01.269 18:04:53 -- spdk/autotest.sh@239 -- # uname -s 00:13:01.269 18:04:53 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:13:01.269 18:04:53 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:01.269 18:04:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:01.269 18:04:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:01.269 18:04:53 -- common/autotest_common.sh@10 -- # set +x 00:13:01.269 ************************************ 00:13:01.269 START TEST sw_hotplug 00:13:01.269 ************************************ 00:13:01.269 18:04:53 sw_hotplug -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:01.269 * Looking for test storage... 
00:13:01.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:01.269 18:04:53 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:01.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:01.530 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:01.530 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:01.530 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:01.530 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:01.530 18:04:53 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # hotplug_wait=6 00:13:01.530 18:04:53 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # hotplug_events=3 00:13:01.530 18:04:53 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvmes=($(nvme_in_userspace)) 00:13:01.530 18:04:53 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvme_in_userspace 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@230 -- # local class 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@15 -- # local i 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:13:01.530 18:04:53 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@15 -- # local i 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@15 -- # local i 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:13:01.530 18:04:53 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:01.530 18:04:53 sw_hotplug -- nvme/sw_hotplug.sh@127 -- # nvme_count=2 00:13:01.530 18:04:53 sw_hotplug -- 
nvme/sw_hotplug.sh@128 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:01.530 18:04:53 sw_hotplug -- nvme/sw_hotplug.sh@130 -- # xtrace_disable 00:13:01.530 18:04:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:01.530 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # run_hotplug 00:13:01.530 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@65 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:01.530 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@73 -- # hotplug_pid=72191 00:13:01.530 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@75 -- # debug_remove_attach_helper 3 6 false 00:13:01.530 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:13:01.530 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:01.530 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 false 00:13:01.530 18:04:54 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:13:01.530 18:04:54 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:13:01.530 18:04:54 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:13:01.530 18:04:54 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 false 00:13:01.530 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:13:01.530 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:13:01.530 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=false 00:13:01.530 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:13:01.530 18:04:54 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:13:01.789 Initializing NVMe Controllers 00:13:01.789 Attaching to 0000:00:10.0 00:13:01.789 Attaching to 0000:00:11.0 00:13:01.789 Attaching to 0000:00:12.0 00:13:01.789 Attaching to 0000:00:13.0 00:13:01.789 Attached to 0000:00:10.0 00:13:01.789 Attached to 0000:00:11.0 00:13:01.789 Attached to 0000:00:13.0 00:13:01.789 Attached to 0000:00:12.0 00:13:01.789 Initialization complete. Starting I/O... 
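The nvme_in_userspace enumeration above asks lspci for machine-readable output (-mm -n -D) and keeps every function whose class/subclass prints as 0108 (mass storage, NVM Express), then drops anything pci_can_use rejects. The class test itself can be done straight from sysfs, where NVMe functions report class code 0x010802; a minimal host-side sketch:

  # Hedged sketch: list NVMe PCI functions by class code, no lspci required.
  for dev in /sys/bus/pci/devices/*; do
    read -r class < "$dev/class"
    [[ $class == 0x010802 ]] && basename "$dev"
  done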
00:13:01.789 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:01.789 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:01.789 QEMU NVMe Ctrl (12343 ): 0 I/Os completed (+0) 00:13:01.789 QEMU NVMe Ctrl (12342 ): 0 I/Os completed (+0) 00:13:01.789 00:13:03.166 QEMU NVMe Ctrl (12340 ): 1156 I/Os completed (+1156) 00:13:03.166 QEMU NVMe Ctrl (12341 ): 1198 I/Os completed (+1198) 00:13:03.166 QEMU NVMe Ctrl (12343 ): 1185 I/Os completed (+1185) 00:13:03.166 QEMU NVMe Ctrl (12342 ): 1211 I/Os completed (+1211) 00:13:03.166 00:13:04.103 QEMU NVMe Ctrl (12340 ): 2472 I/Os completed (+1316) 00:13:04.103 QEMU NVMe Ctrl (12341 ): 2572 I/Os completed (+1374) 00:13:04.103 QEMU NVMe Ctrl (12343 ): 2608 I/Os completed (+1423) 00:13:04.103 QEMU NVMe Ctrl (12342 ): 2575 I/Os completed (+1364) 00:13:04.103 00:13:05.039 QEMU NVMe Ctrl (12340 ): 4080 I/Os completed (+1608) 00:13:05.039 QEMU NVMe Ctrl (12341 ): 4300 I/Os completed (+1728) 00:13:05.039 QEMU NVMe Ctrl (12343 ): 4334 I/Os completed (+1726) 00:13:05.039 QEMU NVMe Ctrl (12342 ): 4296 I/Os completed (+1721) 00:13:05.039 00:13:05.976 QEMU NVMe Ctrl (12340 ): 5749 I/Os completed (+1669) 00:13:05.976 QEMU NVMe Ctrl (12341 ): 6091 I/Os completed (+1791) 00:13:05.976 QEMU NVMe Ctrl (12343 ): 6094 I/Os completed (+1760) 00:13:05.976 QEMU NVMe Ctrl (12342 ): 6086 I/Os completed (+1790) 00:13:05.976 00:13:06.912 QEMU NVMe Ctrl (12340 ): 7074 I/Os completed (+1325) 00:13:06.912 QEMU NVMe Ctrl (12341 ): 7527 I/Os completed (+1436) 00:13:06.912 QEMU NVMe Ctrl (12343 ): 7456 I/Os completed (+1362) 00:13:06.912 QEMU NVMe Ctrl (12342 ): 7484 I/Os completed (+1398) 00:13:06.912 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:07.849 [2024-05-15 18:05:00.014570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:07.849 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:07.849 [2024-05-15 18:05:00.017206] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 [2024-05-15 18:05:00.017369] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 [2024-05-15 18:05:00.017435] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 [2024-05-15 18:05:00.017476] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:07.849 [2024-05-15 18:05:00.022104] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 [2024-05-15 18:05:00.022164] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 [2024-05-15 18:05:00.022195] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 [2024-05-15 18:05:00.022217] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:07.849 [2024-05-15 18:05:00.048082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:07.849 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:07.849 [2024-05-15 18:05:00.050086] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 [2024-05-15 18:05:00.050165] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 [2024-05-15 18:05:00.050194] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 [2024-05-15 18:05:00.050223] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:07.849 [2024-05-15 18:05:00.053490] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 [2024-05-15 18:05:00.053551] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 [2024-05-15 18:05:00.053576] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 [2024-05-15 18:05:00.053600] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:13:07.849 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:07.849 EAL: Scan for (pci) bus failed. 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:13:07.849 QEMU NVMe Ctrl (12343 ): 9080 I/Os completed (+1624) 00:13:07.849 QEMU NVMe Ctrl (12342 ): 9148 I/Os completed (+1664) 00:13:07.849 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:13:07.849 Attaching to 0000:00:10.0 00:13:07.849 Attached to 0000:00:10.0 00:13:07.849 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:13:08.109 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:13:08.109 18:05:00 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:13:08.109 Attaching to 0000:00:11.0 00:13:08.109 Attached to 0000:00:11.0 00:13:09.044 QEMU NVMe Ctrl (12343 ): 10511 I/Os completed (+1431) 00:13:09.044 QEMU NVMe Ctrl (12342 ): 10730 I/Os completed (+1582) 00:13:09.044 QEMU NVMe Ctrl (12340 ): 1597 I/Os completed (+1597) 00:13:09.044 QEMU NVMe Ctrl (12341 ): 1386 I/Os completed (+1386) 00:13:09.044 00:13:09.980 QEMU NVMe Ctrl (12343 ): 12077 I/Os completed (+1566) 00:13:09.980 QEMU NVMe Ctrl (12342 ): 12353 I/Os completed (+1623) 00:13:09.980 QEMU NVMe Ctrl (12340 ): 3188 I/Os completed (+1591) 00:13:09.980 QEMU NVMe Ctrl (12341 ): 3005 I/Os completed (+1619) 00:13:09.980 00:13:10.915 QEMU NVMe Ctrl (12343 ): 13672 I/Os completed (+1595) 00:13:10.915 QEMU NVMe Ctrl (12342 ): 14018 I/Os completed (+1665) 00:13:10.915 QEMU NVMe Ctrl (12340 ): 4813 I/Os completed (+1625) 00:13:10.915 QEMU NVMe Ctrl (12341 ): 4680 I/Os completed (+1675) 00:13:10.915 00:13:11.851 QEMU NVMe Ctrl (12343 ): 15125 
I/Os completed (+1453) 00:13:11.851 QEMU NVMe Ctrl (12342 ): 15594 I/Os completed (+1576) 00:13:11.851 QEMU NVMe Ctrl (12340 ): 6308 I/Os completed (+1495) 00:13:11.851 QEMU NVMe Ctrl (12341 ): 6210 I/Os completed (+1530) 00:13:11.851 00:13:12.789 QEMU NVMe Ctrl (12343 ): 16508 I/Os completed (+1383) 00:13:12.789 QEMU NVMe Ctrl (12342 ): 17136 I/Os completed (+1542) 00:13:12.789 QEMU NVMe Ctrl (12340 ): 7733 I/Os completed (+1425) 00:13:12.789 QEMU NVMe Ctrl (12341 ): 7654 I/Os completed (+1444) 00:13:12.789 00:13:14.167 QEMU NVMe Ctrl (12343 ): 18120 I/Os completed (+1612) 00:13:14.167 QEMU NVMe Ctrl (12342 ): 18778 I/Os completed (+1642) 00:13:14.167 QEMU NVMe Ctrl (12340 ): 9359 I/Os completed (+1626) 00:13:14.167 QEMU NVMe Ctrl (12341 ): 9275 I/Os completed (+1621) 00:13:14.167 00:13:15.103 QEMU NVMe Ctrl (12343 ): 19605 I/Os completed (+1485) 00:13:15.103 QEMU NVMe Ctrl (12342 ): 20352 I/Os completed (+1574) 00:13:15.103 QEMU NVMe Ctrl (12340 ): 10863 I/Os completed (+1504) 00:13:15.103 QEMU NVMe Ctrl (12341 ): 10779 I/Os completed (+1504) 00:13:15.103 00:13:16.039 QEMU NVMe Ctrl (12343 ): 21045 I/Os completed (+1440) 00:13:16.039 QEMU NVMe Ctrl (12342 ): 21860 I/Os completed (+1508) 00:13:16.039 QEMU NVMe Ctrl (12340 ): 12331 I/Os completed (+1468) 00:13:16.039 QEMU NVMe Ctrl (12341 ): 12250 I/Os completed (+1471) 00:13:16.039 00:13:16.974 QEMU NVMe Ctrl (12343 ): 22337 I/Os completed (+1292) 00:13:16.974 QEMU NVMe Ctrl (12342 ): 23321 I/Os completed (+1461) 00:13:16.974 QEMU NVMe Ctrl (12340 ): 13712 I/Os completed (+1381) 00:13:16.974 QEMU NVMe Ctrl (12341 ): 13657 I/Os completed (+1407) 00:13:16.974 00:13:17.909 QEMU NVMe Ctrl (12343 ): 23729 I/Os completed (+1392) 00:13:17.909 QEMU NVMe Ctrl (12342 ): 24745 I/Os completed (+1424) 00:13:17.909 QEMU NVMe Ctrl (12340 ): 15127 I/Os completed (+1415) 00:13:17.909 QEMU NVMe Ctrl (12341 ): 15088 I/Os completed (+1431) 00:13:17.909 00:13:18.844 QEMU NVMe Ctrl (12343 ): 25011 I/Os completed (+1282) 00:13:18.844 QEMU NVMe Ctrl (12342 ): 26186 I/Os completed (+1441) 00:13:18.844 QEMU NVMe Ctrl (12340 ): 16458 I/Os completed (+1331) 00:13:18.844 QEMU NVMe Ctrl (12341 ): 16440 I/Os completed (+1352) 00:13:18.844 00:13:19.780 QEMU NVMe Ctrl (12343 ): 26354 I/Os completed (+1343) 00:13:19.780 QEMU NVMe Ctrl (12342 ): 27631 I/Os completed (+1445) 00:13:19.780 QEMU NVMe Ctrl (12340 ): 17846 I/Os completed (+1388) 00:13:19.780 QEMU NVMe Ctrl (12341 ): 17860 I/Os completed (+1420) 00:13:19.780 00:13:20.038 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:13:20.038 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:13:20.038 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:20.038 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:20.038 [2024-05-15 18:05:12.362505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:20.038 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:20.038 [2024-05-15 18:05:12.365675] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.038 [2024-05-15 18:05:12.365780] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.038 [2024-05-15 18:05:12.365846] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.038 [2024-05-15 18:05:12.365901] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.038 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:20.038 [2024-05-15 18:05:12.369687] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.039 [2024-05-15 18:05:12.369771] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.039 [2024-05-15 18:05:12.369822] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.039 [2024-05-15 18:05:12.369847] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.039 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:20.039 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:20.039 [2024-05-15 18:05:12.396904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:13:20.039 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:20.039 [2024-05-15 18:05:12.399565] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.039 [2024-05-15 18:05:12.399670] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.039 [2024-05-15 18:05:12.399715] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.039 [2024-05-15 18:05:12.399748] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.039 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:20.039 [2024-05-15 18:05:12.403056] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.039 [2024-05-15 18:05:12.403126] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.039 [2024-05-15 18:05:12.403158] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.039 [2024-05-15 18:05:12.403190] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.039 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:13:20.039 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:13:20.039 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:13:20.039 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:13:20.039 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:13:20.297 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:13:20.297 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:13:20.297 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:13:20.297 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:13:20.297 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:13:20.297 Attaching to 0000:00:10.0 00:13:20.297 Attached to 0000:00:10.0 
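The remove/attach cycles running through this trace are driven from sysfs: writing 1 to a function's remove node surprise-removes it, which is what produces the nvme_ctrlr_fail "in failed state" errors and the aborted trackers, and writing 1 to /sys/bus/pci/rescan (as the trap later in this test also does) re-enumerates the bus so the controller can be attached again. A minimal root-only sketch, with bdf as an example address:

  # Hedged sketch of one hot-remove/re-add cycle via sysfs (needs root).
  bdf=0000:00:10.0                               # example address
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # surprise removal
  sleep 1
  echo 1 > /sys/bus/pci/rescan                   # re-enumerate; the function returns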
00:13:20.297 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:13:20.297 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:13:20.297 18:05:12 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:13:20.297 Attaching to 0000:00:11.0 00:13:20.297 Attached to 0000:00:11.0 00:13:20.923 QEMU NVMe Ctrl (12343 ): 27761 I/Os completed (+1407) 00:13:20.923 QEMU NVMe Ctrl (12342 ): 29165 I/Os completed (+1534) 00:13:20.923 QEMU NVMe Ctrl (12340 ): 909 I/Os completed (+909) 00:13:20.923 QEMU NVMe Ctrl (12341 ): 801 I/Os completed (+801) 00:13:20.923 00:13:21.861 QEMU NVMe Ctrl (12343 ): 29280 I/Os completed (+1519) 00:13:21.861 QEMU NVMe Ctrl (12342 ): 30713 I/Os completed (+1548) 00:13:21.861 QEMU NVMe Ctrl (12340 ): 2453 I/Os completed (+1544) 00:13:21.861 QEMU NVMe Ctrl (12341 ): 2334 I/Os completed (+1533) 00:13:21.861 00:13:22.799 QEMU NVMe Ctrl (12343 ): 30936 I/Os completed (+1656) 00:13:22.799 QEMU NVMe Ctrl (12342 ): 32391 I/Os completed (+1678) 00:13:22.799 QEMU NVMe Ctrl (12340 ): 4118 I/Os completed (+1665) 00:13:22.799 QEMU NVMe Ctrl (12341 ): 4005 I/Os completed (+1671) 00:13:22.799 00:13:24.175 QEMU NVMe Ctrl (12343 ): 32538 I/Os completed (+1602) 00:13:24.175 QEMU NVMe Ctrl (12342 ): 34027 I/Os completed (+1636) 00:13:24.175 QEMU NVMe Ctrl (12340 ): 5745 I/Os completed (+1627) 00:13:24.175 QEMU NVMe Ctrl (12341 ): 5624 I/Os completed (+1619) 00:13:24.175 00:13:25.112 QEMU NVMe Ctrl (12343 ): 34094 I/Os completed (+1556) 00:13:25.112 QEMU NVMe Ctrl (12342 ): 35588 I/Os completed (+1561) 00:13:25.112 QEMU NVMe Ctrl (12340 ): 7307 I/Os completed (+1562) 00:13:25.112 QEMU NVMe Ctrl (12341 ): 7196 I/Os completed (+1572) 00:13:25.112 00:13:26.076 QEMU NVMe Ctrl (12343 ): 35553 I/Os completed (+1459) 00:13:26.076 QEMU NVMe Ctrl (12342 ): 37164 I/Os completed (+1576) 00:13:26.076 QEMU NVMe Ctrl (12340 ): 8806 I/Os completed (+1499) 00:13:26.076 QEMU NVMe Ctrl (12341 ): 8716 I/Os completed (+1520) 00:13:26.076 00:13:27.013 QEMU NVMe Ctrl (12343 ): 37037 I/Os completed (+1484) 00:13:27.013 QEMU NVMe Ctrl (12342 ): 38738 I/Os completed (+1574) 00:13:27.013 QEMU NVMe Ctrl (12340 ): 10340 I/Os completed (+1534) 00:13:27.013 QEMU NVMe Ctrl (12341 ): 10242 I/Os completed (+1526) 00:13:27.013 00:13:27.950 QEMU NVMe Ctrl (12343 ): 38648 I/Os completed (+1611) 00:13:27.950 QEMU NVMe Ctrl (12342 ): 40432 I/Os completed (+1694) 00:13:27.950 QEMU NVMe Ctrl (12340 ): 12005 I/Os completed (+1665) 00:13:27.950 QEMU NVMe Ctrl (12341 ): 11915 I/Os completed (+1673) 00:13:27.950 00:13:28.886 QEMU NVMe Ctrl (12343 ): 40166 I/Os completed (+1518) 00:13:28.886 QEMU NVMe Ctrl (12342 ): 42025 I/Os completed (+1593) 00:13:28.886 QEMU NVMe Ctrl (12340 ): 13575 I/Os completed (+1570) 00:13:28.886 QEMU NVMe Ctrl (12341 ): 13510 I/Os completed (+1595) 00:13:28.886 00:13:29.844 QEMU NVMe Ctrl (12343 ): 41587 I/Os completed (+1421) 00:13:29.844 QEMU NVMe Ctrl (12342 ): 43605 I/Os completed (+1580) 00:13:29.844 QEMU NVMe Ctrl (12340 ): 15064 I/Os completed (+1489) 00:13:29.844 QEMU NVMe Ctrl (12341 ): 14962 I/Os completed (+1452) 00:13:29.844 00:13:30.779 QEMU NVMe Ctrl (12343 ): 43055 I/Os completed (+1468) 00:13:30.779 QEMU NVMe Ctrl (12342 ): 45216 I/Os completed (+1611) 00:13:30.779 QEMU NVMe Ctrl (12340 ): 16610 I/Os completed (+1546) 00:13:30.779 QEMU NVMe Ctrl (12341 ): 16503 I/Os completed (+1541) 00:13:30.779 00:13:31.790 QEMU NVMe Ctrl (12343 ): 44592 I/Os completed (+1537) 00:13:31.790 QEMU NVMe Ctrl (12342 ): 46790 I/Os completed (+1574) 00:13:31.790 QEMU NVMe Ctrl (12340 ): 18180 
I/Os completed (+1570) 00:13:31.790 QEMU NVMe Ctrl (12341 ): 18085 I/Os completed (+1582) 00:13:31.790 00:13:32.357 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:13:32.357 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:13:32.357 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:32.357 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:32.357 [2024-05-15 18:05:24.704404] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:32.357 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:32.357 [2024-05-15 18:05:24.709497] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.357 [2024-05-15 18:05:24.709563] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.357 [2024-05-15 18:05:24.709595] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.357 [2024-05-15 18:05:24.709618] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.357 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:32.357 [2024-05-15 18:05:24.712494] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.357 [2024-05-15 18:05:24.712548] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.357 [2024-05-15 18:05:24.712576] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.357 [2024-05-15 18:05:24.712596] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.357 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:32.357 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:32.357 [2024-05-15 18:05:24.735356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:32.357 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:32.357 [2024-05-15 18:05:24.737318] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.357 [2024-05-15 18:05:24.737382] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.358 [2024-05-15 18:05:24.737413] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.358 [2024-05-15 18:05:24.737439] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.358 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:32.358 [2024-05-15 18:05:24.743062] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.358 [2024-05-15 18:05:24.743119] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.358 [2024-05-15 18:05:24.743145] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.358 [2024-05-15 18:05:24.743172] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.358 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:13:32.358 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:13:32.358 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:13:32.358 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:13:32.358 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:13:32.616 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:13:32.616 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:13:32.616 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:13:32.616 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:13:32.616 18:05:24 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:13:32.616 Attaching to 0000:00:10.0 00:13:32.616 Attached to 0000:00:10.0 00:13:32.616 18:05:25 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:13:32.616 18:05:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:13:32.616 18:05:25 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:13:32.616 Attaching to 0000:00:11.0 00:13:32.616 Attached to 0000:00:11.0 00:13:32.616 unregister_dev: QEMU NVMe Ctrl (12343 ) 00:13:32.616 unregister_dev: QEMU NVMe Ctrl (12342 ) 00:13:32.616 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:32.616 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:32.616 [2024-05-15 18:05:25.072493] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:13:44.821 18:05:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:13:44.821 18:05:37 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:13:44.821 18:05:37 sw_hotplug -- common/autotest_common.sh@714 -- # time=43.04 00:13:44.821 18:05:37 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.04 00:13:44.821 18:05:37 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=43.04 00:13:44.821 18:05:37 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.04 2 00:13:44.821 remove_attach_helper took 43.04s to complete (handling 2 nvme drive(s)) 18:05:37 sw_hotplug -- nvme/sw_hotplug.sh@79 -- # sleep 6 00:13:51.380 18:05:43 sw_hotplug -- nvme/sw_hotplug.sh@81 -- # kill -0 72191 00:13:51.380 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 81: kill: (72191) - No such process 00:13:51.380 18:05:43 sw_hotplug -- nvme/sw_hotplug.sh@83 -- # wait 72191 00:13:51.380 18:05:43 sw_hotplug -- nvme/sw_hotplug.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:51.380 18:05:43 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # tgt_run_hotplug 00:13:51.380 18:05:43 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # local dev 00:13:51.380 18:05:43 sw_hotplug -- nvme/sw_hotplug.sh@98 -- # spdk_tgt_pid=72732 00:13:51.380 18:05:43 sw_hotplug -- nvme/sw_hotplug.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:51.380 18:05:43 sw_hotplug -- nvme/sw_hotplug.sh@100 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:13:51.380 18:05:43 sw_hotplug -- nvme/sw_hotplug.sh@101 -- # waitforlisten 72732 00:13:51.380 18:05:43 sw_hotplug -- common/autotest_common.sh@827 -- # '[' -z 72732 ']' 00:13:51.380 18:05:43 sw_hotplug -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.380 18:05:43 sw_hotplug -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:51.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.380 18:05:43 sw_hotplug -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.380 18:05:43 sw_hotplug -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:51.380 18:05:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:51.380 [2024-05-15 18:05:43.186981] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:13:51.380 [2024-05-15 18:05:43.187174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72732 ] 00:13:51.380 [2024-05-15 18:05:43.356515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.380 [2024-05-15 18:05:43.629868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.315 18:05:44 sw_hotplug -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:52.315 18:05:44 sw_hotplug -- common/autotest_common.sh@860 -- # return 0 00:13:52.315 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:13:52.315 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0 00:13:52.315 18:05:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.315 18:05:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:52.315 Nvme00n1 00:13:52.315 18:05:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.316 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme00n1 6 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme00n1 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:52.316 
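The attach sequence traced here (sw_hotplug.sh@104-@105) condenses to three RPCs. A minimal sketch using the equivalent scripts/rpc.py calls, assuming rpc_cmd is a thin wrapper around them and the spdk_tgt above is listening on the default /var/tmp/spdk.sock:

    # Bring the PCIe function in as controller "Nvme00"; its namespace
    # surfaces as bdev Nvme00n1.
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0
    # waitforbdev: let examine callbacks finish, then poll for the bdev
    # with a 6-second timeout.
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py bdev_get_bdevs -b Nvme00n1 -t 6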
18:05:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme00n1 -t 6 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:52.316 [ 00:13:52.316 { 00:13:52.316 "name": "Nvme00n1", 00:13:52.316 "aliases": [ 00:13:52.316 "f836fcac-775e-4d40-b2f9-a377c5706dba" 00:13:52.316 ], 00:13:52.316 "product_name": "NVMe disk", 00:13:52.316 "block_size": 4096, 00:13:52.316 "num_blocks": 1548666, 00:13:52.316 "uuid": "f836fcac-775e-4d40-b2f9-a377c5706dba", 00:13:52.316 "md_size": 64, 00:13:52.316 "md_interleave": false, 00:13:52.316 "dif_type": 0, 00:13:52.316 "assigned_rate_limits": { 00:13:52.316 "rw_ios_per_sec": 0, 00:13:52.316 "rw_mbytes_per_sec": 0, 00:13:52.316 "r_mbytes_per_sec": 0, 00:13:52.316 "w_mbytes_per_sec": 0 00:13:52.316 }, 00:13:52.316 "claimed": false, 00:13:52.316 "zoned": false, 00:13:52.316 "supported_io_types": { 00:13:52.316 "read": true, 00:13:52.316 "write": true, 00:13:52.316 "unmap": true, 00:13:52.316 "write_zeroes": true, 00:13:52.316 "flush": true, 00:13:52.316 "reset": true, 00:13:52.316 "compare": true, 00:13:52.316 "compare_and_write": false, 00:13:52.316 "abort": true, 00:13:52.316 "nvme_admin": true, 00:13:52.316 "nvme_io": true 00:13:52.316 }, 00:13:52.316 "driver_specific": { 00:13:52.316 "nvme": [ 00:13:52.316 { 00:13:52.316 "pci_address": "0000:00:10.0", 00:13:52.316 "trid": { 00:13:52.316 "trtype": "PCIe", 00:13:52.316 "traddr": "0000:00:10.0" 00:13:52.316 }, 00:13:52.316 "ctrlr_data": { 00:13:52.316 "cntlid": 0, 00:13:52.316 "vendor_id": "0x1b36", 00:13:52.316 "model_number": "QEMU NVMe Ctrl", 00:13:52.316 "serial_number": "12340", 00:13:52.316 "firmware_revision": "8.0.0", 00:13:52.316 "subnqn": "nqn.2019-08.org.qemu:12340", 00:13:52.316 "oacs": { 00:13:52.316 "security": 0, 00:13:52.316 "format": 1, 00:13:52.316 "firmware": 0, 00:13:52.316 "ns_manage": 1 00:13:52.316 }, 00:13:52.316 "multi_ctrlr": false, 00:13:52.316 "ana_reporting": false 00:13:52.316 }, 00:13:52.316 "vs": { 00:13:52.316 "nvme_version": "1.4" 00:13:52.316 }, 00:13:52.316 "ns_data": { 00:13:52.316 "id": 1, 00:13:52.316 "can_share": false 00:13:52.316 } 00:13:52.316 } 00:13:52.316 ], 00:13:52.316 "mp_policy": "active_passive" 00:13:52.316 } 00:13:52.316 } 00:13:52.316 ] 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:13:52.316 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:13:52.316 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme01 -t PCIe -a 0000:00:11.0 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:52.316 Nvme01n1 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.316 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme01n1 6 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme01n1 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:13:52.316 18:05:44 
sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme01n1 -t 6 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:52.316 [ 00:13:52.316 { 00:13:52.316 "name": "Nvme01n1", 00:13:52.316 "aliases": [ 00:13:52.316 "5c3078fa-f4ae-4434-b778-09587864e380" 00:13:52.316 ], 00:13:52.316 "product_name": "NVMe disk", 00:13:52.316 "block_size": 4096, 00:13:52.316 "num_blocks": 1310720, 00:13:52.316 "uuid": "5c3078fa-f4ae-4434-b778-09587864e380", 00:13:52.316 "assigned_rate_limits": { 00:13:52.316 "rw_ios_per_sec": 0, 00:13:52.316 "rw_mbytes_per_sec": 0, 00:13:52.316 "r_mbytes_per_sec": 0, 00:13:52.316 "w_mbytes_per_sec": 0 00:13:52.316 }, 00:13:52.316 "claimed": false, 00:13:52.316 "zoned": false, 00:13:52.316 "supported_io_types": { 00:13:52.316 "read": true, 00:13:52.316 "write": true, 00:13:52.316 "unmap": true, 00:13:52.316 "write_zeroes": true, 00:13:52.316 "flush": true, 00:13:52.316 "reset": true, 00:13:52.316 "compare": true, 00:13:52.316 "compare_and_write": false, 00:13:52.316 "abort": true, 00:13:52.316 "nvme_admin": true, 00:13:52.316 "nvme_io": true 00:13:52.316 }, 00:13:52.316 "driver_specific": { 00:13:52.316 "nvme": [ 00:13:52.316 { 00:13:52.316 "pci_address": "0000:00:11.0", 00:13:52.316 "trid": { 00:13:52.316 "trtype": "PCIe", 00:13:52.316 "traddr": "0000:00:11.0" 00:13:52.316 }, 00:13:52.316 "ctrlr_data": { 00:13:52.316 "cntlid": 0, 00:13:52.316 "vendor_id": "0x1b36", 00:13:52.316 "model_number": "QEMU NVMe Ctrl", 00:13:52.316 "serial_number": "12341", 00:13:52.316 "firmware_revision": "8.0.0", 00:13:52.316 "subnqn": "nqn.2019-08.org.qemu:12341", 00:13:52.316 "oacs": { 00:13:52.316 "security": 0, 00:13:52.316 "format": 1, 00:13:52.316 "firmware": 0, 00:13:52.316 "ns_manage": 1 00:13:52.316 }, 00:13:52.316 "multi_ctrlr": false, 00:13:52.316 "ana_reporting": false 00:13:52.316 }, 00:13:52.316 "vs": { 00:13:52.316 "nvme_version": "1.4" 00:13:52.316 }, 00:13:52.316 "ns_data": { 00:13:52.316 "id": 1, 00:13:52.316 "can_share": false 00:13:52.316 } 00:13:52.316 } 00:13:52.316 ], 00:13:52.316 "mp_policy": "active_passive" 00:13:52.316 } 00:13:52.316 } 00:13:52.316 ] 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:13:52.316 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@108 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.316 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # debug_remove_attach_helper 3 6 true 00:13:52.316 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:13:52.316 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@706 -- # exec 
00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:13:52.316 18:05:44 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:13:52.316 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:13:52.316 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:13:52.316 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:13:52.316 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:13:52.316 18:05:44 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:13:58.933 18:05:50 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:13:58.933 18:05:50 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:58.933 18:05:50 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:58.933 18:05:50 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:58.933 18:05:50 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:58.933 18:05:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:13:58.933 18:05:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:13:58.933 [2024-05-15 18:05:50.809769] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:58.933 [2024-05-15 18:05:50.812623] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.933 [2024-05-15 18:05:50.812744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.933 [2024-05-15 18:05:50.812769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.933 [2024-05-15 18:05:50.812802] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.933 [2024-05-15 18:05:50.812819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.934 [2024-05-15 18:05:50.812836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.934 [2024-05-15 18:05:50.812851] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.934 [2024-05-15 18:05:50.812867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.934 [2024-05-15 18:05:50.812881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.934 [2024-05-15 18:05:50.812898] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.934 [2024-05-15 18:05:50.812911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.934 [2024-05-15 18:05:50.812932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.934 [2024-05-15 18:05:51.209771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:58.934 [2024-05-15 18:05:51.212746] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.934 [2024-05-15 18:05:51.212830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.934 [2024-05-15 18:05:51.212855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.934 [2024-05-15 18:05:51.212885] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.934 [2024-05-15 18:05:51.212903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.934 [2024-05-15 18:05:51.212918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.934 [2024-05-15 18:05:51.212935] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.934 [2024-05-15 18:05:51.212948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.934 [2024-05-15 18:05:51.212964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.934 [2024-05-15 18:05:51.212978] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:58.934 [2024-05-15 18:05:51.213010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:58.934 [2024-05-15 18:05:51.213024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.10 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.10 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.10 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme 
drive(s))' 12.10 2 00:14:05.502 remove_attach_helper took 12.10s to complete (handling 2 nvme drive(s)) 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # debug_remove_attach_helper 3 6 true 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:14:05.502 18:05:56 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:14:05.502 18:05:56 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:14:10.796 18:06:02 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:14:10.796 18:06:02 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:14:10.796 18:06:02 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:14:10.796 18:06:02 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # trap - ERR 00:14:10.796 18:06:02 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # print_backtrace 00:14:10.796 18:06:02 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:14:10.796 18:06:02 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:14:10.796 18:06:02 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:14:10.796 18:06:02 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:14:10.796 18:06:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:14:10.796 18:06:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:14:17.360 18:06:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:14:17.360 18:06:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.360 18:06:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:14:17.360 18:06:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:17.360 18:06:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.360 18:06:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:14:17.360 18:06:08 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:14:17.360 18:06:08 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.06 00:14:17.360 18:06:08 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:14:17.360 18:06:08 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:14:17.360 18:06:08 sw_hotplug -- 
common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:14:17.360 18:06:08 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:14:17.360 18:06:08 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.06 00:14:17.360 18:06:08 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.06 00:14:17.360 18:06:08 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 12.06 2 00:14:17.360 remove_attach_helper took 12.06s to complete (handling 2 nvme drive(s)) 18:06:08 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:14:17.360 18:06:08 sw_hotplug -- nvme/sw_hotplug.sh@118 -- # killprocess 72732 00:14:17.360 18:06:08 sw_hotplug -- common/autotest_common.sh@946 -- # '[' -z 72732 ']' 00:14:17.360 18:06:08 sw_hotplug -- common/autotest_common.sh@950 -- # kill -0 72732 00:14:17.360 18:06:08 sw_hotplug -- common/autotest_common.sh@951 -- # uname 00:14:17.360 18:06:08 sw_hotplug -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:17.361 18:06:08 sw_hotplug -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72732 00:14:17.361 killing process with pid 72732 00:14:17.361 18:06:08 sw_hotplug -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:17.361 18:06:08 sw_hotplug -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:17.361 18:06:08 sw_hotplug -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72732' 00:14:17.361 18:06:08 sw_hotplug -- common/autotest_common.sh@965 -- # kill 72732 00:14:17.361 18:06:08 sw_hotplug -- common/autotest_common.sh@970 -- # wait 72732 00:14:18.738 00:14:18.738 real 1m17.913s 00:14:18.738 user 0m47.052s 00:14:18.738 sys 0m13.894s 00:14:18.738 ************************************ 00:14:18.738 END TEST sw_hotplug 00:14:18.738 ************************************ 00:14:18.738 18:06:11 sw_hotplug -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:18.738 18:06:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:18.738 18:06:11 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:14:18.738 18:06:11 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:18.738 18:06:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:18.738 18:06:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:18.738 18:06:11 -- common/autotest_common.sh@10 -- # set +x 00:14:18.738 ************************************ 00:14:18.738 START TEST nvme_xnvme 00:14:18.738 ************************************ 00:14:18.738 18:06:11 nvme_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:18.998 * Looking for test storage... 
00:14:18.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:18.998 18:06:11 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:18.998 18:06:11 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.998 18:06:11 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.998 18:06:11 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.998 18:06:11 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.998 18:06:11 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.998 18:06:11 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.998 18:06:11 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:18.998 18:06:11 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.998 18:06:11 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:14:18.998 18:06:11 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:18.998 18:06:11 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:18.998 18:06:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:18.998 ************************************ 00:14:18.998 START TEST xnvme_to_malloc_dd_copy 00:14:18.998 ************************************ 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1121 -- # malloc_to_xnvme_copy 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
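Every xnvme test in this section runs against a RAM-backed null block device; the init_null_blk / remove_null_blk bracket traced at dd/common.sh@190 and @195 is, in plain shell:

    # Create /dev/nullb0 as a 1 GiB null_blk device (no real media behind it).
    modprobe null_blk gb=1
    # ... the spdk_dd copies and bdevperf runs below target /dev/nullb0 ...
    modprobe -r null_blk    # remove_null_blk tears it down afterwards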
00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:18.998 18:06:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:18.998 { 00:14:18.998 "subsystems": [ 00:14:18.998 { 00:14:18.998 "subsystem": "bdev", 00:14:18.998 "config": [ 00:14:18.998 { 00:14:18.998 "params": { 00:14:18.998 "block_size": 512, 00:14:18.998 "num_blocks": 2097152, 00:14:18.998 "name": "malloc0" 00:14:18.998 }, 00:14:18.998 "method": "bdev_malloc_create" 00:14:18.998 }, 00:14:18.998 { 00:14:18.998 "params": { 00:14:18.998 "io_mechanism": "libaio", 00:14:18.998 "filename": "/dev/nullb0", 00:14:18.998 "name": "null0" 00:14:18.998 }, 00:14:18.998 "method": "bdev_xnvme_create" 00:14:18.998 }, 00:14:18.998 { 00:14:18.998 "method": "bdev_wait_for_examine" 00:14:18.998 } 00:14:18.998 ] 00:14:18.998 } 00:14:18.998 ] 00:14:18.998 } 00:14:18.998 [2024-05-15 18:06:11.418539] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:14:18.998 [2024-05-15 18:06:11.418688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73094 ] 00:14:19.257 [2024-05-15 18:06:11.582114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.516 [2024-05-15 18:06:11.821617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.547  Copying: 165/1024 [MB] (165 MBps) Copying: 330/1024 [MB] (165 MBps) Copying: 495/1024 [MB] (164 MBps) Copying: 642/1024 [MB] (147 MBps) Copying: 798/1024 [MB] (155 MBps) Copying: 959/1024 [MB] (161 MBps) Copying: 1024/1024 [MB] (average 159 MBps) 00:14:31.547 00:14:31.547 18:06:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:31.547 18:06:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:31.547 18:06:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:31.547 18:06:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:31.547 { 00:14:31.547 "subsystems": [ 00:14:31.547 { 00:14:31.547 "subsystem": "bdev", 00:14:31.547 "config": [ 00:14:31.547 { 00:14:31.547 "params": { 00:14:31.547 "block_size": 512, 00:14:31.547 "num_blocks": 2097152, 00:14:31.547 "name": "malloc0" 00:14:31.547 }, 00:14:31.547 "method": "bdev_malloc_create" 00:14:31.547 }, 00:14:31.547 { 00:14:31.547 "params": { 00:14:31.547 "io_mechanism": "libaio", 00:14:31.547 "filename": "/dev/nullb0", 00:14:31.547 "name": "null0" 00:14:31.547 }, 00:14:31.547 "method": "bdev_xnvme_create" 00:14:31.547 }, 00:14:31.547 { 00:14:31.547 "method": "bdev_wait_for_examine" 00:14:31.547 } 00:14:31.547 ] 00:14:31.547 } 00:14:31.547 ] 00:14:31.547 } 00:14:31.547 [2024-05-15 18:06:23.720514] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
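The 1024 MiB just copied (average 159 MBps) is the malloc0 -> null0 direction; xnvme.sh@47 below runs the reverse. Both are the same spdk_dd invocation, fed the JSON bdev config printed above on fd 62:

    # xnvme.sh@42: malloc bdev -> xnvme (libaio) null bdev
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62
    # xnvme.sh@47: and back again, null0 -> malloc0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62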
00:14:31.547 [2024-05-15 18:06:23.720677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73232 ] 00:14:31.547 [2024-05-15 18:06:23.891952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.805 [2024-05-15 18:06:24.163617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.814  Copying: 162/1024 [MB] (162 MBps) Copying: 328/1024 [MB] (166 MBps) Copying: 495/1024 [MB] (166 MBps) Copying: 661/1024 [MB] (166 MBps) Copying: 827/1024 [MB] (165 MBps) Copying: 989/1024 [MB] (161 MBps) Copying: 1024/1024 [MB] (average 164 MBps) 00:14:43.814 00:14:43.814 18:06:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:43.814 18:06:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:43.814 18:06:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:43.814 18:06:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:43.814 18:06:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:43.814 18:06:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:43.814 { 00:14:43.814 "subsystems": [ 00:14:43.814 { 00:14:43.814 "subsystem": "bdev", 00:14:43.814 "config": [ 00:14:43.814 { 00:14:43.814 "params": { 00:14:43.814 "block_size": 512, 00:14:43.814 "num_blocks": 2097152, 00:14:43.814 "name": "malloc0" 00:14:43.814 }, 00:14:43.814 "method": "bdev_malloc_create" 00:14:43.814 }, 00:14:43.814 { 00:14:43.814 "params": { 00:14:43.814 "io_mechanism": "io_uring", 00:14:43.814 "filename": "/dev/nullb0", 00:14:43.814 "name": "null0" 00:14:43.814 }, 00:14:43.814 "method": "bdev_xnvme_create" 00:14:43.814 }, 00:14:43.814 { 00:14:43.814 "method": "bdev_wait_for_examine" 00:14:43.814 } 00:14:43.814 ] 00:14:43.814 } 00:14:43.814 ] 00:14:43.814 } 00:14:43.814 [2024-05-15 18:06:35.650487] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:14:43.814 [2024-05-15 18:06:35.650749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73364 ] 00:14:43.814 [2024-05-15 18:06:35.823486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.814 [2024-05-15 18:06:36.075193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.340  Copying: 166/1024 [MB] (166 MBps) Copying: 333/1024 [MB] (166 MBps) Copying: 502/1024 [MB] (169 MBps) Copying: 670/1024 [MB] (167 MBps) Copying: 837/1024 [MB] (166 MBps) Copying: 1003/1024 [MB] (166 MBps) Copying: 1024/1024 [MB] (average 167 MBps) 00:14:55.340 00:14:55.340 18:06:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:55.340 18:06:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:55.340 18:06:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:55.340 18:06:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:55.340 { 00:14:55.340 "subsystems": [ 00:14:55.340 { 00:14:55.340 "subsystem": "bdev", 00:14:55.340 "config": [ 00:14:55.340 { 00:14:55.340 "params": { 00:14:55.340 "block_size": 512, 00:14:55.340 "num_blocks": 2097152, 00:14:55.340 "name": "malloc0" 00:14:55.340 }, 00:14:55.340 "method": "bdev_malloc_create" 00:14:55.340 }, 00:14:55.340 { 00:14:55.340 "params": { 00:14:55.340 "io_mechanism": "io_uring", 00:14:55.340 "filename": "/dev/nullb0", 00:14:55.340 "name": "null0" 00:14:55.340 }, 00:14:55.340 "method": "bdev_xnvme_create" 00:14:55.340 }, 00:14:55.340 { 00:14:55.340 "method": "bdev_wait_for_examine" 00:14:55.340 } 00:14:55.340 ] 00:14:55.340 } 00:14:55.340 ] 00:14:55.340 } 00:14:55.340 [2024-05-15 18:06:47.368389] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
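This io_uring pass differs from the libaio pair above by exactly one key in the null0 bdev definition (xnvme.sh@38-@39); the copies themselves are unchanged:

    # Flip the xnvme backend, then replay the same two spdk_dd copies.
    method_bdev_xnvme_create_0["io_mechanism"]=io_uring
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62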
00:14:55.341 [2024-05-15 18:06:47.368575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73496 ] 00:14:55.341 [2024-05-15 18:06:47.535814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.341 [2024-05-15 18:06:47.777523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.491  Copying: 196/1024 [MB] (196 MBps) Copying: 381/1024 [MB] (184 MBps) Copying: 561/1024 [MB] (179 MBps) Copying: 740/1024 [MB] (179 MBps) Copying: 922/1024 [MB] (181 MBps) Copying: 1024/1024 [MB] (average 184 MBps) 00:15:06.491 00:15:06.491 18:06:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:15:06.491 18:06:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:15:06.491 00:15:06.491 real 0m47.160s 00:15:06.491 user 0m40.855s 00:15:06.491 sys 0m5.640s 00:15:06.491 18:06:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:06.491 ************************************ 00:15:06.491 END TEST xnvme_to_malloc_dd_copy 00:15:06.491 ************************************ 00:15:06.491 18:06:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:06.491 18:06:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:06.491 18:06:58 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:06.491 18:06:58 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:06.491 18:06:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:06.491 ************************************ 00:15:06.491 START TEST xnvme_bdevperf 00:15:06.491 ************************************ 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1121 -- # xnvme_bdevperf 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:06.492 18:06:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:06.492 { 00:15:06.492 "subsystems": [ 00:15:06.492 { 00:15:06.492 "subsystem": "bdev", 00:15:06.492 "config": [ 00:15:06.492 { 00:15:06.492 "params": { 00:15:06.492 "io_mechanism": "libaio", 00:15:06.492 "filename": "/dev/nullb0", 00:15:06.492 "name": "null0" 00:15:06.492 }, 00:15:06.492 "method": "bdev_xnvme_create" 00:15:06.492 }, 00:15:06.492 { 00:15:06.492 "method": "bdev_wait_for_examine" 00:15:06.492 } 00:15:06.492 ] 00:15:06.492 } 00:15:06.492 ] 00:15:06.492 } 00:15:06.492 [2024-05-15 18:06:58.630675] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:15:06.492 [2024-05-15 18:06:58.630822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73645 ] 00:15:06.492 [2024-05-15 18:06:58.795602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.750 [2024-05-15 18:06:59.083503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.009 Running I/O for 5 seconds... 00:15:12.276 00:15:12.276 Latency(us) 00:15:12.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.276 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:12.276 null0 : 5.00 109221.28 426.65 0.00 0.00 582.05 151.74 1064.96 00:15:12.276 =================================================================================================================== 00:15:12.276 Total : 109221.28 426.65 0.00 0.00 582.05 151.74 1064.96 00:15:13.210 18:07:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:13.210 18:07:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:13.210 18:07:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:13.210 18:07:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:13.210 18:07:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:13.210 18:07:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:13.469 { 00:15:13.469 "subsystems": [ 00:15:13.469 { 00:15:13.469 "subsystem": "bdev", 00:15:13.469 "config": [ 00:15:13.469 { 00:15:13.469 "params": { 00:15:13.469 "io_mechanism": "io_uring", 00:15:13.469 "filename": "/dev/nullb0", 00:15:13.469 "name": "null0" 00:15:13.469 }, 00:15:13.469 "method": "bdev_xnvme_create" 00:15:13.469 }, 00:15:13.469 { 00:15:13.469 "method": "bdev_wait_for_examine" 00:15:13.469 } 00:15:13.469 ] 00:15:13.469 } 00:15:13.469 ] 00:15:13.469 } 00:15:13.469 [2024-05-15 18:07:05.741261] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:15:13.469 [2024-05-15 18:07:05.741450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73725 ] 00:15:13.469 [2024-05-15 18:07:05.907621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.728 [2024-05-15 18:07:06.154802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.295 Running I/O for 5 seconds... 00:15:19.559 00:15:19.559 Latency(us) 00:15:19.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.559 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:19.559 null0 : 5.00 145936.22 570.06 0.00 0.00 434.87 226.21 752.17 00:15:19.559 =================================================================================================================== 00:15:19.559 Total : 145936.22 570.06 0.00 0.00 434.87 226.21 752.17 00:15:20.493 18:07:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:15:20.493 18:07:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:15:20.493 00:15:20.493 real 0m14.296s 00:15:20.493 user 0m11.352s 00:15:20.493 sys 0m2.731s 00:15:20.493 18:07:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:20.493 18:07:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:20.493 ************************************ 00:15:20.493 END TEST xnvme_bdevperf 00:15:20.493 ************************************ 00:15:20.493 00:15:20.493 real 1m1.640s 00:15:20.493 user 0m52.270s 00:15:20.493 sys 0m8.490s 00:15:20.493 18:07:12 nvme_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:20.493 18:07:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.494 ************************************ 00:15:20.494 END TEST nvme_xnvme 00:15:20.494 ************************************ 00:15:20.494 18:07:12 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:20.494 18:07:12 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:20.494 18:07:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:20.494 18:07:12 -- common/autotest_common.sh@10 -- # set +x 00:15:20.494 ************************************ 00:15:20.494 START TEST blockdev_xnvme 00:15:20.494 ************************************ 00:15:20.494 18:07:12 blockdev_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:20.494 * Looking for test storage... 
00:15:20.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73865 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73865 00:15:20.494 18:07:12 blockdev_xnvme -- common/autotest_common.sh@827 -- # '[' -z 73865 ']' 00:15:20.494 18:07:12 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:20.494 18:07:12 blockdev_xnvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.494 18:07:12 blockdev_xnvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:20.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.494 18:07:12 blockdev_xnvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.494 18:07:12 blockdev_xnvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:20.494 18:07:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.752 [2024-05-15 18:07:13.100273] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:15:20.752 [2024-05-15 18:07:13.100483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73865 ] 00:15:21.010 [2024-05-15 18:07:13.264714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.269 [2024-05-15 18:07:13.514229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.863 18:07:14 blockdev_xnvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:21.863 18:07:14 blockdev_xnvme -- common/autotest_common.sh@860 -- # return 0 00:15:21.863 18:07:14 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:15:21.863 18:07:14 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:15:21.863 18:07:14 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:21.863 18:07:14 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:21.863 18:07:14 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:22.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:22.379 Waiting for block devices as requested 00:15:22.379 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:22.379 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:27.649 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1666 -- # local nvme bdf 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:15:27.649 
18:07:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1c1n1 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1c1n1 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:27.649 18:07:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:27.649 18:07:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 
00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.649 nvme0n1 00:15:27.649 nvme0n2 00:15:27.649 nvme0n3 00:15:27.649 nvme1n1 00:15:27.649 nvme2n1 00:15:27.649 nvme3n1 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:15:27.649 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.649 18:07:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 
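setup_xnvme_conf, traced above, first rules out zoned namespaces by reading /sys/block/*/queue/zoned, then emits one bdev_xnvme_create per remaining /dev/nvme*n* node; in this run all six namespaces are conventional and get the io_uring mechanism. A condensed sketch of the same flow against a live target on /var/tmp/spdk.sock (the harness instead batches the creates through its persistent rpc_cmd pipe, hence the RPC_PIPE_TIMEOUT=30 seen earlier):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    io_mechanism=io_uring

    declare -A zoned_devs
    for sys in /sys/block/nvme*; do
        # queue/zoned reads "none" for a conventional (non-zoned) namespace
        if [[ -e $sys/queue/zoned && $(<"$sys/queue/zoned") != none ]]; then
            zoned_devs[${sys##*/}]=1
        fi
    done

    for nvme in /dev/nvme*n*; do
        [[ -b $nvme && -z ${zoned_devs[${nvme##*/}]:-} ]] || continue
        "$rpc_py" -s /var/tmp/spdk.sock bdev_xnvme_create "$nvme" "${nvme##*/}" "$io_mechanism"
    done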
00:15:27.908 18:07:20 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.908 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:15:27.908 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "b449f31a-1354-4ae6-bbb8-e03c7798d77e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b449f31a-1354-4ae6-bbb8-e03c7798d77e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "11ead980-0825-4a0d-b91c-afa1ab5ee445"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "11ead980-0825-4a0d-b91c-afa1ab5ee445",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "f4659cef-db04-451f-9943-9da73cf5d7b1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f4659cef-db04-451f-9943-9da73cf5d7b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c7be9af2-fc69-4afb-952e-c9a9521c188d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c7be9af2-fc69-4afb-952e-c9a9521c188d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "71eaef14-3861-473b-946b-245f9fdbcaeb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "71eaef14-3861-473b-946b-245f9fdbcaeb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": 
false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c27ed7f2-7ae4-4edb-b5cb-03b7247793a4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c27ed7f2-7ae4-4edb-b5cb-03b7247793a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:15:27.908 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:15:27.908 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:15:27.908 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:15:27.908 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:15:27.908 18:07:20 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 73865 00:15:27.909 18:07:20 blockdev_xnvme -- common/autotest_common.sh@946 -- # '[' -z 73865 ']' 00:15:27.909 18:07:20 blockdev_xnvme -- common/autotest_common.sh@950 -- # kill -0 73865 00:15:27.909 18:07:20 blockdev_xnvme -- common/autotest_common.sh@951 -- # uname 00:15:27.909 18:07:20 blockdev_xnvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:27.909 18:07:20 blockdev_xnvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73865 00:15:27.909 18:07:20 blockdev_xnvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:27.909 18:07:20 blockdev_xnvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:27.909 18:07:20 blockdev_xnvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73865' 00:15:27.909 killing process with pid 73865 00:15:27.909 18:07:20 blockdev_xnvme -- common/autotest_common.sh@965 -- # kill 73865 00:15:27.909 18:07:20 blockdev_xnvme -- common/autotest_common.sh@970 -- # wait 73865 00:15:30.440 18:07:22 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:30.440 18:07:22 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:30.440 18:07:22 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:15:30.440 18:07:22 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:30.440 18:07:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:30.440 ************************************ 00:15:30.440 START TEST bdev_hello_world 00:15:30.440 ************************************ 00:15:30.440 18:07:22 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:30.440 [2024-05-15 18:07:22.506711] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:15:30.440 [2024-05-15 18:07:22.506847] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74146 ] 00:15:30.440 [2024-05-15 18:07:22.672473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.440 [2024-05-15 18:07:22.921171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.006 [2024-05-15 18:07:23.350554] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:31.007 [2024-05-15 18:07:23.350629] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:31.007 [2024-05-15 18:07:23.350669] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:31.007 [2024-05-15 18:07:23.353168] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:31.007 [2024-05-15 18:07:23.353641] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:31.007 [2024-05-15 18:07:23.353679] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:31.007 [2024-05-15 18:07:23.353910] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:15:31.007 00:15:31.007 [2024-05-15 18:07:23.353954] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:32.384 00:15:32.384 real 0m2.054s 00:15:32.384 user 0m1.672s 00:15:32.384 sys 0m0.267s 00:15:32.384 18:07:24 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:32.384 ************************************ 00:15:32.384 END TEST bdev_hello_world 00:15:32.384 ************************************ 00:15:32.384 18:07:24 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:32.384 18:07:24 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:15:32.384 18:07:24 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:32.384 18:07:24 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:32.384 18:07:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:32.384 ************************************ 00:15:32.384 START TEST bdev_bounds 00:15:32.384 ************************************ 00:15:32.384 18:07:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:15:32.384 18:07:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=74188 00:15:32.384 18:07:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:32.384 18:07:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:32.384 18:07:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 74188' 00:15:32.384 Process bdevio pid: 74188 00:15:32.384 18:07:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 74188 00:15:32.384 18:07:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 74188 ']' 00:15:32.384 18:07:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.384 18:07:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:32.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
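run_test is what produces the START TEST / END TEST banners and the real/user/sys triplet between them. A rough reconstruction from the visible output only; the actual helper also manages xtrace state around the timed command:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                    # source of the real/user/sys lines above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }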
00:15:32.384 18:07:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.384 18:07:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:32.384 18:07:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:32.384 [2024-05-15 18:07:24.625960] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:15:32.384 [2024-05-15 18:07:24.626194] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74188 ] 00:15:32.384 [2024-05-15 18:07:24.803678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:32.642 [2024-05-15 18:07:25.047862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.642 [2024-05-15 18:07:25.048041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.642 [2024-05-15 18:07:25.048063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.209 18:07:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:33.209 18:07:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:15:33.209 18:07:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:33.209 I/O targets: 00:15:33.209 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:33.209 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:33.209 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:33.209 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:33.209 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:33.209 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:33.209 00:15:33.209 00:15:33.209 CUnit - A unit testing framework for C - Version 2.1-3 00:15:33.209 http://cunit.sourceforge.net/ 00:15:33.209 00:15:33.209 00:15:33.209 Suite: bdevio tests on: nvme3n1 00:15:33.209 Test: blockdev write read block ...passed 00:15:33.209 Test: blockdev write zeroes read block ...passed 00:15:33.209 Test: blockdev write zeroes read no split ...passed 00:15:33.209 Test: blockdev write zeroes read split ...passed 00:15:33.468 Test: blockdev write zeroes read split partial ...passed 00:15:33.468 Test: blockdev reset ...passed 00:15:33.468 Test: blockdev write read 8 blocks ...passed 00:15:33.468 Test: blockdev write read size > 128k ...passed 00:15:33.468 Test: blockdev write read invalid size ...passed 00:15:33.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:33.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:33.468 Test: blockdev write read max offset ...passed 00:15:33.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:33.468 Test: blockdev writev readv 8 blocks ...passed 00:15:33.468 Test: blockdev writev readv 30 x 1block ...passed 00:15:33.468 Test: blockdev writev readv block ...passed 00:15:33.468 Test: blockdev writev readv size > 128k ...passed 00:15:33.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:33.468 Test: blockdev comparev and writev ...passed 00:15:33.468 Test: blockdev nvme passthru rw ...passed 00:15:33.468 Test: blockdev nvme passthru vendor specific ...passed 00:15:33.468 Test: blockdev nvme admin 
passthru ...passed 00:15:33.468 Test: blockdev copy ...passed 00:15:33.468 Suite: bdevio tests on: nvme2n1 00:15:33.468 Test: blockdev write read block ...passed 00:15:33.468 Test: blockdev write zeroes read block ...passed 00:15:33.468 Test: blockdev write zeroes read no split ...passed 00:15:33.468 Test: blockdev write zeroes read split ...passed 00:15:33.468 Test: blockdev write zeroes read split partial ...passed 00:15:33.468 Test: blockdev reset ...passed 00:15:33.468 Test: blockdev write read 8 blocks ...passed 00:15:33.468 Test: blockdev write read size > 128k ...passed 00:15:33.468 Test: blockdev write read invalid size ...passed 00:15:33.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:33.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:33.468 Test: blockdev write read max offset ...passed 00:15:33.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:33.468 Test: blockdev writev readv 8 blocks ...passed 00:15:33.468 Test: blockdev writev readv 30 x 1block ...passed 00:15:33.468 Test: blockdev writev readv block ...passed 00:15:33.468 Test: blockdev writev readv size > 128k ...passed 00:15:33.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:33.468 Test: blockdev comparev and writev ...passed 00:15:33.468 Test: blockdev nvme passthru rw ...passed 00:15:33.468 Test: blockdev nvme passthru vendor specific ...passed 00:15:33.468 Test: blockdev nvme admin passthru ...passed 00:15:33.468 Test: blockdev copy ...passed 00:15:33.468 Suite: bdevio tests on: nvme1n1 00:15:33.468 Test: blockdev write read block ...passed 00:15:33.468 Test: blockdev write zeroes read block ...passed 00:15:33.468 Test: blockdev write zeroes read no split ...passed 00:15:33.468 Test: blockdev write zeroes read split ...passed 00:15:33.468 Test: blockdev write zeroes read split partial ...passed 00:15:33.468 Test: blockdev reset ...passed 00:15:33.468 Test: blockdev write read 8 blocks ...passed 00:15:33.468 Test: blockdev write read size > 128k ...passed 00:15:33.469 Test: blockdev write read invalid size ...passed 00:15:33.469 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:33.469 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:33.469 Test: blockdev write read max offset ...passed 00:15:33.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:33.469 Test: blockdev writev readv 8 blocks ...passed 00:15:33.469 Test: blockdev writev readv 30 x 1block ...passed 00:15:33.469 Test: blockdev writev readv block ...passed 00:15:33.469 Test: blockdev writev readv size > 128k ...passed 00:15:33.469 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:33.469 Test: blockdev comparev and writev ...passed 00:15:33.469 Test: blockdev nvme passthru rw ...passed 00:15:33.469 Test: blockdev nvme passthru vendor specific ...passed 00:15:33.469 Test: blockdev nvme admin passthru ...passed 00:15:33.469 Test: blockdev copy ...passed 00:15:33.469 Suite: bdevio tests on: nvme0n3 00:15:33.469 Test: blockdev write read block ...passed 00:15:33.469 Test: blockdev write zeroes read block ...passed 00:15:33.469 Test: blockdev write zeroes read no split ...passed 00:15:33.469 Test: blockdev write zeroes read split ...passed 00:15:33.469 Test: blockdev write zeroes read split partial ...passed 00:15:33.469 Test: blockdev reset ...passed 00:15:33.469 Test: blockdev write read 8 blocks ...passed 00:15:33.469 Test: blockdev 
write read size > 128k ...passed 00:15:33.469 Test: blockdev write read invalid size ...passed 00:15:33.469 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:33.469 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:33.469 Test: blockdev write read max offset ...passed 00:15:33.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:33.469 Test: blockdev writev readv 8 blocks ...passed 00:15:33.469 Test: blockdev writev readv 30 x 1block ...passed 00:15:33.469 Test: blockdev writev readv block ...passed 00:15:33.469 Test: blockdev writev readv size > 128k ...passed 00:15:33.727 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:33.727 Test: blockdev comparev and writev ...passed 00:15:33.727 Test: blockdev nvme passthru rw ...passed 00:15:33.727 Test: blockdev nvme passthru vendor specific ...passed 00:15:33.727 Test: blockdev nvme admin passthru ...passed 00:15:33.727 Test: blockdev copy ...passed 00:15:33.727 Suite: bdevio tests on: nvme0n2 00:15:33.727 Test: blockdev write read block ...passed 00:15:33.727 Test: blockdev write zeroes read block ...passed 00:15:33.727 Test: blockdev write zeroes read no split ...passed 00:15:33.727 Test: blockdev write zeroes read split ...passed 00:15:33.727 Test: blockdev write zeroes read split partial ...passed 00:15:33.727 Test: blockdev reset ...passed 00:15:33.727 Test: blockdev write read 8 blocks ...passed 00:15:33.727 Test: blockdev write read size > 128k ...passed 00:15:33.727 Test: blockdev write read invalid size ...passed 00:15:33.727 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:33.727 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:33.727 Test: blockdev write read max offset ...passed 00:15:33.727 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:33.727 Test: blockdev writev readv 8 blocks ...passed 00:15:33.727 Test: blockdev writev readv 30 x 1block ...passed 00:15:33.727 Test: blockdev writev readv block ...passed 00:15:33.727 Test: blockdev writev readv size > 128k ...passed 00:15:33.727 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:33.727 Test: blockdev comparev and writev ...passed 00:15:33.727 Test: blockdev nvme passthru rw ...passed 00:15:33.727 Test: blockdev nvme passthru vendor specific ...passed 00:15:33.727 Test: blockdev nvme admin passthru ...passed 00:15:33.727 Test: blockdev copy ...passed 00:15:33.727 Suite: bdevio tests on: nvme0n1 00:15:33.727 Test: blockdev write read block ...passed 00:15:33.727 Test: blockdev write zeroes read block ...passed 00:15:33.727 Test: blockdev write zeroes read no split ...passed 00:15:33.727 Test: blockdev write zeroes read split ...passed 00:15:33.727 Test: blockdev write zeroes read split partial ...passed 00:15:33.727 Test: blockdev reset ...passed 00:15:33.727 Test: blockdev write read 8 blocks ...passed 00:15:33.727 Test: blockdev write read size > 128k ...passed 00:15:33.727 Test: blockdev write read invalid size ...passed 00:15:33.727 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:33.727 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:33.727 Test: blockdev write read max offset ...passed 00:15:33.727 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:33.727 Test: blockdev writev readv 8 blocks ...passed 00:15:33.727 Test: blockdev writev readv 30 x 1block ...passed 00:15:33.727 Test: 
blockdev writev readv block ...passed 00:15:33.727 Test: blockdev writev readv size > 128k ...passed 00:15:33.727 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:33.727 Test: blockdev comparev and writev ...passed 00:15:33.727 Test: blockdev nvme passthru rw ...passed 00:15:33.727 Test: blockdev nvme passthru vendor specific ...passed 00:15:33.727 Test: blockdev nvme admin passthru ...passed 00:15:33.727 Test: blockdev copy ...passed 00:15:33.727 00:15:33.727 Run Summary: Type Total Ran Passed Failed Inactive 00:15:33.727 suites 6 6 n/a 0 0 00:15:33.727 tests 138 138 138 0 0 00:15:33.727 asserts 780 780 780 0 n/a 00:15:33.727 00:15:33.727 Elapsed time = 1.346 seconds 00:15:33.727 0 00:15:33.727 18:07:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 74188 00:15:33.727 18:07:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 74188 ']' 00:15:33.727 18:07:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 74188 00:15:33.727 18:07:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:15:33.727 18:07:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:33.727 18:07:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74188 00:15:33.727 18:07:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:33.727 killing process with pid 74188 00:15:33.727 18:07:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:33.727 18:07:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74188' 00:15:33.727 18:07:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 74188 00:15:33.727 18:07:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 74188 00:15:35.102 18:07:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:15:35.102 00:15:35.102 real 0m2.764s 00:15:35.102 user 0m6.415s 00:15:35.102 sys 0m0.442s 00:15:35.102 18:07:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:35.102 18:07:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:35.102 ************************************ 00:15:35.102 END TEST bdev_bounds 00:15:35.102 ************************************ 00:15:35.102 18:07:27 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:35.102 18:07:27 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:35.102 18:07:27 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:35.103 18:07:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:35.103 ************************************ 00:15:35.103 START TEST bdev_nbd 00:15:35.103 ************************************ 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
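killprocess, seen here tearing down both spdk_tgt (pid 73865) and bdevio (pid 74188), checks the victim before signalling it: kill -0 to confirm it is still alive, ps -o comm= to refuse to signal a sudo wrapper, then kill and wait. Condensed to the behaviour visible in the trace, not the exact helper:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                                    # must still be running
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1  # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                                   # reap and propagate the exit status
    }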
00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=74257 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 74257 /var/tmp/spdk-nbd.sock 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 74257 ']' 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:35.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:35.103 18:07:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:35.103 [2024-05-15 18:07:27.434129] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
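With bdev_svc listening on /var/tmp/spdk-nbd.sock, the nbd test exports each xnvme bdev as a kernel /dev/nbdN node, waits for it to appear in /proc/partitions, and proves the data path with one 4 KiB direct read, which is exactly what the dd lines below record. One device's cycle, condensed (the 20-iteration cap mirrors the traced counters; the sleep interval is an assumption):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    "$rpc_py" -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0      # export the bdev via the nbd driver

    for ((i = 1; i <= 20; i++)); do                            # waitfornbd
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done

    # a single direct 4 KiB read through the exported device
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct

    "$rpc_py" -s "$sock" nbd_stop_disk /dev/nbd0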
00:15:35.103 [2024-05-15 18:07:27.434967] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.103 [2024-05-15 18:07:27.601093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.363 [2024-05-15 18:07:27.830158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:35.930 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.188 1+0 records in 
00:15:36.188 1+0 records out 00:15:36.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565649 s, 7.2 MB/s 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:36.188 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.446 1+0 records in 00:15:36.446 1+0 records out 00:15:36.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587792 s, 7.0 MB/s 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:36.446 18:07:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.705 1+0 records in 00:15:36.705 1+0 records out 00:15:36.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613245 s, 6.7 MB/s 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:36.705 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:36.963 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:36.963 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.221 1+0 records in 00:15:37.221 1+0 records out 00:15:37.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000901605 s, 4.5 MB/s 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:37.221 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.480 1+0 records in 00:15:37.480 1+0 records out 00:15:37.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000969953 s, 4.2 MB/s 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:37.480 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:37.739 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:37.739 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:37.739 18:07:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:37.739 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:15:37.739 18:07:29 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@865 -- # local i 00:15:37.739 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:15:37.739 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:15:37.739 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:15:37.739 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:15:37.739 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:15:37.739 18:07:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:15:37.739 18:07:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.739 1+0 records in 00:15:37.739 1+0 records out 00:15:37.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00313781 s, 1.3 MB/s 00:15:37.739 18:07:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.739 18:07:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:15:37.739 18:07:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.739 18:07:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:15:37.739 18:07:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:15:37.739 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:37.739 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:37.739 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:37.998 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:37.998 { 00:15:37.998 "nbd_device": "/dev/nbd0", 00:15:37.998 "bdev_name": "nvme0n1" 00:15:37.998 }, 00:15:37.998 { 00:15:37.998 "nbd_device": "/dev/nbd1", 00:15:37.998 "bdev_name": "nvme0n2" 00:15:37.998 }, 00:15:37.998 { 00:15:37.998 "nbd_device": "/dev/nbd2", 00:15:37.998 "bdev_name": "nvme0n3" 00:15:37.998 }, 00:15:37.998 { 00:15:37.998 "nbd_device": "/dev/nbd3", 00:15:37.998 "bdev_name": "nvme1n1" 00:15:37.998 }, 00:15:37.998 { 00:15:37.998 "nbd_device": "/dev/nbd4", 00:15:37.998 "bdev_name": "nvme2n1" 00:15:37.998 }, 00:15:37.998 { 00:15:37.998 "nbd_device": "/dev/nbd5", 00:15:37.998 "bdev_name": "nvme3n1" 00:15:37.998 } 00:15:37.998 ]' 00:15:37.998 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:37.998 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:37.998 { 00:15:37.998 "nbd_device": "/dev/nbd0", 00:15:37.998 "bdev_name": "nvme0n1" 00:15:37.998 }, 00:15:37.998 { 00:15:37.998 "nbd_device": "/dev/nbd1", 00:15:37.998 "bdev_name": "nvme0n2" 00:15:37.998 }, 00:15:37.998 { 00:15:37.998 "nbd_device": "/dev/nbd2", 00:15:37.998 "bdev_name": "nvme0n3" 00:15:37.998 }, 00:15:37.998 { 00:15:37.998 "nbd_device": "/dev/nbd3", 00:15:37.998 "bdev_name": "nvme1n1" 00:15:37.998 }, 00:15:37.998 { 00:15:37.998 "nbd_device": "/dev/nbd4", 00:15:37.998 "bdev_name": "nvme2n1" 00:15:37.998 }, 00:15:37.998 { 00:15:37.998 "nbd_device": "/dev/nbd5", 00:15:37.998 "bdev_name": "nvme3n1" 00:15:37.998 } 00:15:37.998 ]' 00:15:37.998 18:07:30 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:37.998 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:37.998 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.998 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:37.998 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:37.998 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:37.998 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.998 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:38.317 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.317 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.317 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.317 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.317 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.317 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.317 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:38.317 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.317 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.317 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:38.578 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:38.578 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:38.578 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:38.578 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.578 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.578 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:38.579 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:38.579 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.579 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.579 18:07:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:38.579 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:38.579 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:38.579 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:38.579 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.579 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.579 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 
/proc/partitions 00:15:38.579 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:38.579 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.579 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.579 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:38.837 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:38.837 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:38.838 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:38.838 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.838 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.838 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:39.097 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.097 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.097 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.097 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.357 18:07:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:39.616 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:39.616 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:39.616 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:39.875 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:40.135 /dev/nbd0 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- 
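The attach side that follows is the mirror image: one nbd_start_disk RPC per bdev/device pair, each followed by a waitfornbd readiness check. A sketch of that loop, with the socket and script paths taken from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdev_list=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < ${#nbd_list[@]}; i++)); do
        # Prints the device path (e.g. /dev/nbd0) on success.
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done
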
common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.135 1+0 records in 00:15:40.135 1+0 records out 00:15:40.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703336 s, 5.8 MB/s 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:40.135 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:40.394 /dev/nbd1 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.394 1+0 records in 00:15:40.394 1+0 records out 00:15:40.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510377 s, 8.0 MB/s 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:15:40.394 18:07:32 
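waitfornbd does more than watch /proc/partitions: once the name appears, it issues a single O_DIRECT 4 KiB read and checks that something actually came back. A sketch, assuming both loops retry up to 20 times with a short sleep (the trace shows the bounds but not the sleeps), with a shortened temp-file path:

    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest   # the run uses test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed
        done
        for ((i = 1; i <= 20; i++)); do
            # One direct-I/O read; success proves the device path is live.
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct && break
            sleep 0.1   # assumed
        done
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]   # non-empty read == ready
    }
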
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:40.394 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:40.653 /dev/nbd10 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.653 1+0 records in 00:15:40.653 1+0 records out 00:15:40.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813407 s, 5.0 MB/s 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:40.653 18:07:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:40.912 /dev/nbd11 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:15:40.912 18:07:33 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.912 1+0 records in 00:15:40.912 1+0 records out 00:15:40.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626843 s, 6.5 MB/s 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:40.912 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:41.171 /dev/nbd12 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.171 1+0 records in 00:15:41.171 1+0 records out 00:15:41.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010634 s, 3.9 MB/s 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:41.171 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:41.431 /dev/nbd13 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.431 1+0 records in 00:15:41.431 1+0 records out 00:15:41.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702447 s, 5.8 MB/s 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.431 18:07:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:41.688 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:41.688 { 00:15:41.688 "nbd_device": "/dev/nbd0", 00:15:41.688 "bdev_name": "nvme0n1" 00:15:41.688 }, 00:15:41.688 { 00:15:41.688 "nbd_device": "/dev/nbd1", 00:15:41.688 "bdev_name": "nvme0n2" 00:15:41.688 }, 00:15:41.689 { 00:15:41.689 "nbd_device": "/dev/nbd10", 00:15:41.689 "bdev_name": "nvme0n3" 00:15:41.689 }, 00:15:41.689 { 00:15:41.689 "nbd_device": "/dev/nbd11", 00:15:41.689 "bdev_name": "nvme1n1" 00:15:41.689 }, 00:15:41.689 { 00:15:41.689 "nbd_device": "/dev/nbd12", 00:15:41.689 "bdev_name": "nvme2n1" 00:15:41.689 }, 00:15:41.689 { 00:15:41.689 "nbd_device": "/dev/nbd13", 00:15:41.689 "bdev_name": "nvme3n1" 00:15:41.689 } 00:15:41.689 ]' 00:15:41.689 18:07:34 
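With all six devices attached, nbd_get_count reduces the nbd_get_disks JSON listed just below to a plain number: project out .nbd_device with jq and count the matching lines, exactly as nbd_common.sh@63-66 does in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    count=$("$rpc" -s "$sock" nbd_get_disks \
        | jq -r '.[] | .nbd_device' \
        | grep -c /dev/nbd) || count=0   # grep -c exits non-zero on zero matches
    echo "$count"   # 6 while attached; 0 once the disks are stopped again
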
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:41.689 { 00:15:41.689 "nbd_device": "/dev/nbd0", 00:15:41.689 "bdev_name": "nvme0n1" 00:15:41.689 }, 00:15:41.689 { 00:15:41.689 "nbd_device": "/dev/nbd1", 00:15:41.689 "bdev_name": "nvme0n2" 00:15:41.689 }, 00:15:41.689 { 00:15:41.689 "nbd_device": "/dev/nbd10", 00:15:41.689 "bdev_name": "nvme0n3" 00:15:41.689 }, 00:15:41.689 { 00:15:41.689 "nbd_device": "/dev/nbd11", 00:15:41.689 "bdev_name": "nvme1n1" 00:15:41.689 }, 00:15:41.689 { 00:15:41.689 "nbd_device": "/dev/nbd12", 00:15:41.689 "bdev_name": "nvme2n1" 00:15:41.689 }, 00:15:41.689 { 00:15:41.689 "nbd_device": "/dev/nbd13", 00:15:41.689 "bdev_name": "nvme3n1" 00:15:41.689 } 00:15:41.689 ]' 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:41.689 /dev/nbd1 00:15:41.689 /dev/nbd10 00:15:41.689 /dev/nbd11 00:15:41.689 /dev/nbd12 00:15:41.689 /dev/nbd13' 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:41.689 /dev/nbd1 00:15:41.689 /dev/nbd10 00:15:41.689 /dev/nbd11 00:15:41.689 /dev/nbd12 00:15:41.689 /dev/nbd13' 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:41.689 256+0 records in 00:15:41.689 256+0 records out 00:15:41.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619341 s, 169 MB/s 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:41.689 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:41.947 256+0 records in 00:15:41.947 256+0 records out 00:15:41.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129209 s, 8.1 MB/s 00:15:41.947 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:41.947 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:42.206 256+0 records in 00:15:42.206 256+0 records out 00:15:42.206 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.130321 s, 8.0 MB/s 00:15:42.206 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:42.206 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:42.206 256+0 records in 00:15:42.206 256+0 records out 00:15:42.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152908 s, 6.9 MB/s 00:15:42.206 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:42.206 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:42.464 256+0 records in 00:15:42.464 256+0 records out 00:15:42.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14999 s, 7.0 MB/s 00:15:42.464 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:42.464 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:42.464 256+0 records in 00:15:42.464 256+0 records out 00:15:42.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153614 s, 6.8 MB/s 00:15:42.464 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:42.464 18:07:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:42.724 256+0 records in 00:15:42.724 256+0 records out 00:15:42.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132818 s, 7.9 MB/s 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.724 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:42.982 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:42.982 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:42.982 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:42.982 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.982 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.982 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:42.983 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:42.983 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.983 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.983 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:43.241 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:43.241 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:43.241 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:43.241 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.241 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.241 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:43.241 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:43.241 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.241 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.241 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
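The dd/cmp exchange above is the actual data-path test: one 1 MiB random buffer is written to every device with O_DIRECT, then compared back byte for byte. Condensed (the temp-file path is shortened here):

    tmp_file=/tmp/nbdrandtest   # the run uses test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"   # exits non-zero at the first differing byte
    done
    rm "$tmp_file"
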
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:43.500 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:43.500 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:43.500 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:43.500 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.500 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.500 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:43.500 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:43.500 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.500 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.500 18:07:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.067 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:44.635 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:44.635 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:44.635 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:44.635 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.635 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:15:44.635 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:44.635 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:44.635 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.635 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:44.635 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.635 18:07:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:15:44.894 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:45.153 malloc_lvol_verify 00:15:45.153 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:45.412 e2098915-ee2c-43c9-8b6e-499a943b272b 00:15:45.412 18:07:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:45.670 cad1f02d-6ec9-47dd-9ee7-355ed0861baf 00:15:45.670 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:45.929 /dev/nbd0 00:15:45.929 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:15:45.929 mke2fs 1.46.5 (30-Dec-2021) 00:15:45.929 Discarding device blocks: 0/4096 done 00:15:45.929 Creating filesystem with 4096 1k blocks and 
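The lvol pass now in progress condenses to five commands: build a malloc bdev, put a logical volume store on it, carve out a volume, expose it over nbd, and prove the whole stack by formatting it. As an isolated sequence:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0   # its output continues below
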
1024 inodes 00:15:45.929 00:15:45.929 Allocating group tables: 0/1 done 00:15:45.929 Writing inode tables: 0/1 done 00:15:45.929 Creating journal (1024 blocks): done 00:15:45.929 Writing superblocks and filesystem accounting information: 0/1 done 00:15:45.929 00:15:45.929 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:15:45.929 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:45.929 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.929 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:45.929 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:45.929 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:45.929 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.929 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 74257 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 74257 ']' 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 74257 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74257 00:15:46.188 killing process with pid 74257 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74257' 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 74257 00:15:46.188 18:07:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 74257 00:15:47.567 ************************************ 00:15:47.567 END TEST bdev_nbd 00:15:47.567 ************************************ 00:15:47.567 18:07:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:15:47.567 00:15:47.567 real 0m12.577s 00:15:47.567 user 0m17.718s 00:15:47.567 sys 
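killprocess, whose trace closes out the nbd test above, reads roughly as follows. The failure-path behavior (including the sudo guard) is an assumption, since the trace only shows the happy path; wait works here because pid 74257 was started by this same shell:

    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1   # still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        fi
        [ "$process_name" = sudo ] && return 1   # assumed: never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
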
0m4.127s 00:15:47.567 18:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:47.567 18:07:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:47.567 18:07:39 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:15:47.567 18:07:39 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:15:47.567 18:07:39 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:15:47.567 18:07:39 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:15:47.567 18:07:39 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:47.567 18:07:39 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:47.567 18:07:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.567 ************************************ 00:15:47.567 START TEST bdev_fio 00:15:47.567 ************************************ 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:47.567 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:15:47.567 18:07:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # 
[[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n2]' 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n2 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n3]' 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n3 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:47.567 ************************************ 00:15:47.567 START TEST bdev_fio_rw_verify 00:15:47.567 ************************************ 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
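The bdev.fio file being assembled here is a generated global section (fio_config_gen selects a verify/AIO template and, for fio 3.x, appends serialize_overlap=1) followed by one [job_*] stanza per bdev. The appending steps alone look like this; the template contents themselves are not shown in the trace:

    out=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    if [[ $(fio --version) == *fio-3* ]]; then
        echo serialize_overlap=1 >> "$out"
    fi
    for b in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
        printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$out"
    done
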
--iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:47.567 18:07:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:47.826 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.826 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.826 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.826 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.826 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.826 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:47.826 fio-3.35 00:15:47.826 Starting 6 threads 00:16:00.037 00:16:00.037 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74679: Wed May 15 18:07:51 2024 00:16:00.037 read: IOPS=28.1k, BW=110MiB/s (115MB/s)(1097MiB/10001msec) 00:16:00.037 slat (usec): 
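The ldd/grep/awk dance above exists because fio itself is not built with ASan: the sanitizer runtime that the spdk_bdev plugin links against has to be preloaded ahead of the plugin, or the run typically aborts at startup complaining that the ASan runtime did not come first. Put together:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
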
min=3, max=891, avg= 6.89, stdev= 4.47 00:16:00.037 clat (usec): min=99, max=4842, avg=683.61, stdev=236.58 00:16:00.037 lat (usec): min=105, max=4849, avg=690.50, stdev=237.19 00:16:00.037 clat percentiles (usec): 00:16:00.037 | 50.000th=[ 709], 99.000th=[ 1287], 99.900th=[ 1926], 99.990th=[ 3654], 00:16:00.037 | 99.999th=[ 4817] 00:16:00.037 write: IOPS=28.3k, BW=111MiB/s (116MB/s)(1107MiB/10001msec); 0 zone resets 00:16:00.037 slat (usec): min=13, max=3927, avg=25.21, stdev=27.36 00:16:00.037 clat (usec): min=82, max=4859, avg=751.90, stdev=238.43 00:16:00.037 lat (usec): min=96, max=4901, avg=777.11, stdev=240.46 00:16:00.037 clat percentiles (usec): 00:16:00.037 | 50.000th=[ 766], 99.000th=[ 1418], 99.900th=[ 1926], 99.990th=[ 4047], 00:16:00.037 | 99.999th=[ 4817] 00:16:00.037 bw ( KiB/s): min=94587, max=141238, per=99.74%, avg=113081.00, stdev=2292.01, samples=114 00:16:00.037 iops : min=23645, max=35309, avg=28269.68, stdev=573.02, samples=114 00:16:00.037 lat (usec) : 100=0.01%, 250=2.37%, 500=15.52%, 750=35.28%, 1000=38.17% 00:16:00.037 lat (msec) : 2=8.58%, 4=0.07%, 10=0.01% 00:16:00.037 cpu : usr=61.14%, sys=26.01%, ctx=7744, majf=0, minf=24029 00:16:00.037 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.037 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.037 issued rwts: total=280728,283468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.037 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:00.037 00:16:00.037 Run status group 0 (all jobs): 00:16:00.037 READ: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1097MiB (1150MB), run=10001-10001msec 00:16:00.037 WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=1107MiB (1161MB), run=10001-10001msec 00:16:00.037 ----------------------------------------------------- 00:16:00.037 Suppressions used: 00:16:00.037 count bytes template 00:16:00.037 6 48 /usr/src/fio/parse.c 00:16:00.037 2555 245280 /usr/src/fio/iolog.c 00:16:00.037 1 8 libtcmalloc_minimal.so 00:16:00.037 1 904 libcrypto.so 00:16:00.037 ----------------------------------------------------- 00:16:00.037 00:16:00.296 00:16:00.296 real 0m12.501s 00:16:00.296 user 0m38.668s 00:16:00.296 sys 0m16.008s 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:00.296 ************************************ 00:16:00.296 END TEST bdev_fio_rw_verify 00:16:00.296 ************************************ 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:16:00.296 
18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "b449f31a-1354-4ae6-bbb8-e03c7798d77e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b449f31a-1354-4ae6-bbb8-e03c7798d77e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "11ead980-0825-4a0d-b91c-afa1ab5ee445"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "11ead980-0825-4a0d-b91c-afa1ab5ee445",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "f4659cef-db04-451f-9943-9da73cf5d7b1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f4659cef-db04-451f-9943-9da73cf5d7b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c7be9af2-fc69-4afb-952e-c9a9521c188d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c7be9af2-fc69-4afb-952e-c9a9521c188d",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "71eaef14-3861-473b-946b-245f9fdbcaeb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "71eaef14-3861-473b-946b-245f9fdbcaeb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c27ed7f2-7ae4-4edb-b5cb-03b7247793a4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c27ed7f2-7ae4-4edb-b5cb-03b7247793a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:00.296 /home/vagrant/spdk_repo/spdk 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:16:00.296 18:07:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:16:00.297 18:07:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:16:00.297 00:16:00.297 real 0m12.679s 00:16:00.297 user 0m38.774s 00:16:00.297 sys 0m16.079s 00:16:00.297 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:00.297 18:07:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:00.297 ************************************ 00:16:00.297 END TEST bdev_fio 00:16:00.297 ************************************ 00:16:00.297 18:07:52 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:00.297 18:07:52 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:00.297 18:07:52 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:16:00.297 18:07:52 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:00.297 18:07:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:00.297 ************************************ 00:16:00.297 START TEST bdev_verify 00:16:00.297 ************************************ 00:16:00.297 18:07:52 
blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:00.297 [2024-05-15 18:07:52.785529] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:16:00.297 [2024-05-15 18:07:52.785728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74852 ] 00:16:00.563 [2024-05-15 18:07:52.957162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:00.821 [2024-05-15 18:07:53.252991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.821 [2024-05-15 18:07:53.253003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.388 Running I/O for 5 seconds... 00:16:06.655 00:16:06.655 Latency(us) 00:16:06.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.655 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.655 Verification LBA range: start 0x0 length 0x80000 00:16:06.655 nvme0n1 : 5.01 1711.16 6.68 0.00 0.00 74665.56 13941.29 75306.82 00:16:06.655 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:06.655 Verification LBA range: start 0x80000 length 0x80000 00:16:06.655 nvme0n1 : 5.03 1527.13 5.97 0.00 0.00 83652.37 14537.08 90082.21 00:16:06.655 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.655 Verification LBA range: start 0x0 length 0x80000 00:16:06.655 nvme0n2 : 5.05 1723.73 6.73 0.00 0.00 73976.62 11081.54 71017.19 00:16:06.655 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:06.655 Verification LBA range: start 0x80000 length 0x80000 00:16:06.655 nvme0n2 : 5.03 1526.53 5.96 0.00 0.00 83521.12 20375.74 93418.59 00:16:06.655 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.655 Verification LBA range: start 0x0 length 0x80000 00:16:06.655 nvme0n3 : 5.06 1718.86 6.71 0.00 0.00 74045.80 12511.42 65297.69 00:16:06.655 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:06.655 Verification LBA range: start 0x80000 length 0x80000 00:16:06.655 nvme0n3 : 5.08 1538.51 6.01 0.00 0.00 82700.10 5868.45 99614.72 00:16:06.655 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.655 Verification LBA range: start 0x0 length 0x20000 00:16:06.655 nvme1n1 : 5.06 1721.29 6.72 0.00 0.00 73804.32 8936.73 76260.07 00:16:06.655 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:06.655 Verification LBA range: start 0x20000 length 0x20000 00:16:06.655 nvme1n1 : 5.08 1537.94 6.01 0.00 0.00 82569.13 6553.60 104380.97 00:16:06.655 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.655 Verification LBA range: start 0x0 length 0xbd0bd 00:16:06.655 nvme2n1 : 5.07 3063.89 11.97 0.00 0.00 41330.63 4885.41 64821.06 00:16:06.655 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:06.655 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:06.655 nvme2n1 : 5.07 2825.90 11.04 0.00 0.00 44758.17 4766.25 76260.07 00:16:06.655 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.655 Verification LBA range: start 0x0 length 0xa0000 00:16:06.655 
nvme3n1 : 5.06 1719.89 6.72 0.00 0.00 73548.54 7119.59 79119.83 00:16:06.655 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:06.655 Verification LBA range: start 0xa0000 length 0xa0000 00:16:06.655 nvme3n1 : 5.07 1539.63 6.01 0.00 0.00 82087.03 7745.16 100567.97 00:16:06.655 =================================================================================================================== 00:16:06.655 Total : 22154.46 86.54 0.00 0.00 68816.35 4766.25 104380.97 00:16:07.589 00:16:07.589 real 0m7.368s 00:16:07.589 user 0m11.436s 00:16:07.589 sys 0m1.772s 00:16:07.589 18:08:00 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:07.589 ************************************ 00:16:07.589 END TEST bdev_verify 00:16:07.589 ************************************ 00:16:07.589 18:08:00 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:07.846 18:08:00 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:07.846 18:08:00 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:16:07.846 18:08:00 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:07.846 18:08:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:07.846 ************************************ 00:16:07.846 START TEST bdev_verify_big_io 00:16:07.846 ************************************ 00:16:07.846 18:08:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:07.846 [2024-05-15 18:08:00.215897] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:16:07.846 [2024-05-15 18:08:00.216086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74957 ] 00:16:08.104 [2024-05-15 18:08:00.389532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:08.362 [2024-05-15 18:08:00.627970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.362 [2024-05-15 18:08:00.627981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.928 Running I/O for 5 seconds... 
00:16:15.489 00:16:15.490 Latency(us) 00:16:15.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.490 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.490 Verification LBA range: start 0x0 length 0x8000 00:16:15.490 nvme0n1 : 5.89 78.76 4.92 0.00 0.00 1599278.19 21209.83 2684354.56 00:16:15.490 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.490 Verification LBA range: start 0x8000 length 0x8000 00:16:15.490 nvme0n1 : 5.90 151.99 9.50 0.00 0.00 816506.22 68157.44 1052389.00 00:16:15.490 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.490 Verification LBA range: start 0x0 length 0x8000 00:16:15.490 nvme0n2 : 5.88 102.00 6.38 0.00 0.00 1200938.46 38844.97 1830241.75 00:16:15.490 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.490 Verification LBA range: start 0x8000 length 0x8000 00:16:15.490 nvme0n2 : 5.92 140.60 8.79 0.00 0.00 859312.73 105334.23 1380307.32 00:16:15.490 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.490 Verification LBA range: start 0x0 length 0x8000 00:16:15.490 nvme0n3 : 5.87 132.14 8.26 0.00 0.00 903752.80 23116.33 964689.92 00:16:15.490 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.490 Verification LBA range: start 0x8000 length 0x8000 00:16:15.490 nvme0n3 : 5.90 81.37 5.09 0.00 0.00 1434208.69 119156.36 2470826.36 00:16:15.490 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.490 Verification LBA range: start 0x0 length 0x2000 00:16:15.490 nvme1n1 : 5.89 138.66 8.67 0.00 0.00 837271.76 31695.59 888429.85 00:16:15.490 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.490 Verification LBA range: start 0x2000 length 0x2000 00:16:15.490 nvme1n1 : 5.93 105.22 6.58 0.00 0.00 1083204.48 25380.31 2562338.44 00:16:15.490 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.490 Verification LBA range: start 0x0 length 0xbd0b 00:16:15.490 nvme2n1 : 5.88 163.51 10.22 0.00 0.00 690577.83 9472.93 911307.87 00:16:15.490 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.490 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:15.490 nvme2n1 : 5.92 151.62 9.48 0.00 0.00 730065.57 9115.46 1349803.29 00:16:15.490 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.490 Verification LBA range: start 0x0 length 0xa000 00:16:15.490 nvme3n1 : 5.89 122.29 7.64 0.00 0.00 895465.37 19065.02 2089525.99 00:16:15.490 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.490 Verification LBA range: start 0xa000 length 0xa000 00:16:15.490 nvme3n1 : 5.93 125.55 7.85 0.00 0.00 852254.28 11260.28 1395559.33 00:16:15.490 =================================================================================================================== 00:16:15.490 Total : 1493.71 93.36 0.00 0.00 937020.55 9115.46 2684354.56 00:16:16.425 00:16:16.425 real 0m8.500s 00:16:16.425 user 0m15.181s 00:16:16.425 sys 0m0.578s 00:16:16.425 18:08:08 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:16.425 18:08:08 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.425 ************************************ 00:16:16.425 END TEST bdev_verify_big_io 00:16:16.425 ************************************ 00:16:16.425 
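The verify tables above come from repeated bdevperf runs that differ only in flags. A minimal sketch of the big-I/O pass just finished, reconstructed from the command line recorded in the log (paths assume the CI checkout at /home/vagrant/spdk_repo/spdk; the flag glosses are inferred from the output above rather than taken from bdevperf's help text):

# -q 128    queue depth per job
# -o 65536  I/O size in bytes (the earlier verify pass used -o 4096)
# -w verify write-then-read-back verification workload
# -t 5      run time in seconds
# -C        appears to fan one job per bdev out to each allowed core, matching the
#           paired "Core Mask 0x1"/"Core Mask 0x2" rows in the tables above
# -m 0x3    core mask: reactors on cores 0 and 1
cd /home/vagrant/spdk_repo/spdk
./build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3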
18:08:08 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:16.425 18:08:08 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:16:16.425 18:08:08 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:16.425 18:08:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:16.425 ************************************ 00:16:16.425 START TEST bdev_write_zeroes 00:16:16.425 ************************************ 00:16:16.425 18:08:08 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:16.425 [2024-05-15 18:08:08.761654] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:16:16.425 [2024-05-15 18:08:08.761846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75069 ] 00:16:16.426 [2024-05-15 18:08:08.926039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.684 [2024-05-15 18:08:09.169638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.248 Running I/O for 1 seconds... 00:16:18.235 00:16:18.235 Latency(us) 00:16:18.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.235 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.235 nvme0n1 : 1.01 9157.67 35.77 0.00 0.00 13961.48 7923.90 20971.52 00:16:18.235 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.235 nvme0n2 : 1.02 9143.10 35.72 0.00 0.00 13970.76 7864.32 20137.43 00:16:18.235 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.235 nvme0n3 : 1.02 9129.60 35.66 0.00 0.00 13976.03 7864.32 20375.74 00:16:18.235 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.235 nvme1n1 : 1.03 9116.08 35.61 0.00 0.00 13985.66 7864.32 20971.52 00:16:18.235 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.235 nvme2n1 : 1.03 16059.42 62.73 0.00 0.00 7893.30 2934.23 15013.70 00:16:18.235 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.235 nvme3n1 : 1.03 9092.38 35.52 0.00 0.00 13957.85 6791.91 21924.77 00:16:18.235 =================================================================================================================== 00:16:18.235 Total : 61698.24 241.01 0.00 0.00 12382.51 2934.23 21924.77 00:16:19.610 00:16:19.610 real 0m3.182s 00:16:19.610 user 0m2.390s 00:16:19.610 sys 0m0.608s 00:16:19.610 18:08:11 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:19.610 18:08:11 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:19.610 ************************************ 00:16:19.610 END TEST bdev_write_zeroes 00:16:19.610 ************************************ 00:16:19.610 18:08:11 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 
-w write_zeroes -t 1 '' 00:16:19.610 18:08:11 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:16:19.610 18:08:11 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:19.610 18:08:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:19.610 ************************************ 00:16:19.610 START TEST bdev_json_nonenclosed 00:16:19.610 ************************************ 00:16:19.610 18:08:11 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:19.610 [2024-05-15 18:08:11.995313] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:16:19.610 [2024-05-15 18:08:11.995489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75134 ] 00:16:19.868 [2024-05-15 18:08:12.159703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.127 [2024-05-15 18:08:12.405993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.127 [2024-05-15 18:08:12.406160] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:20.127 [2024-05-15 18:08:12.406205] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:20.127 [2024-05-15 18:08:12.406237] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:20.385 00:16:20.385 real 0m0.899s 00:16:20.385 user 0m0.648s 00:16:20.385 sys 0m0.145s 00:16:20.385 18:08:12 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:20.385 18:08:12 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:20.385 ************************************ 00:16:20.385 END TEST bdev_json_nonenclosed 00:16:20.385 ************************************ 00:16:20.385 18:08:12 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.385 18:08:12 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:16:20.385 18:08:12 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:20.385 18:08:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:20.385 ************************************ 00:16:20.385 START TEST bdev_json_nonarray 00:16:20.385 ************************************ 00:16:20.385 18:08:12 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.643 [2024-05-15 18:08:12.955917] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:16:20.643 [2024-05-15 18:08:12.956091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75159 ] 00:16:20.643 [2024-05-15 18:08:13.129520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.902 [2024-05-15 18:08:13.371146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.902 [2024-05-15 18:08:13.371322] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:20.902 [2024-05-15 18:08:13.371386] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:20.902 [2024-05-15 18:08:13.371418] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:21.469 00:16:21.469 real 0m0.914s 00:16:21.469 user 0m0.664s 00:16:21.469 sys 0m0.143s 00:16:21.469 18:08:13 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:21.469 18:08:13 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:21.469 ************************************ 00:16:21.469 END TEST bdev_json_nonarray 00:16:21.469 ************************************ 00:16:21.469 18:08:13 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:16:21.469 18:08:13 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:16:21.469 18:08:13 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:16:21.469 18:08:13 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:16:21.469 18:08:13 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:16:21.469 18:08:13 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:21.469 18:08:13 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:21.469 18:08:13 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:21.469 18:08:13 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:21.469 18:08:13 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:21.469 18:08:13 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:21.469 18:08:13 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:22.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:23.942 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:23.942 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:23.942 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:24.201 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:24.201 00:16:24.201 real 1m3.632s 00:16:24.201 user 1m45.252s 00:16:24.201 sys 0m29.488s 00:16:24.201 18:08:16 blockdev_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:24.201 18:08:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:24.201 ************************************ 00:16:24.201 END TEST blockdev_xnvme 00:16:24.201 ************************************ 00:16:24.201 18:08:16 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:24.201 18:08:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:24.201 18:08:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:24.201 18:08:16 -- common/autotest_common.sh@10 -- # 
set +x 00:16:24.201 ************************************ 00:16:24.201 START TEST ublk 00:16:24.201 ************************************ 00:16:24.201 18:08:16 ublk -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:24.201 * Looking for test storage... 00:16:24.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:24.201 18:08:16 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:24.201 18:08:16 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:24.201 18:08:16 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:24.201 18:08:16 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:24.201 18:08:16 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:24.201 18:08:16 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:24.201 18:08:16 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:24.201 18:08:16 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:24.201 18:08:16 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:24.201 18:08:16 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:24.201 18:08:16 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:24.201 18:08:16 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:24.201 18:08:16 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:24.201 18:08:16 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:24.201 18:08:16 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:24.201 18:08:16 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:24.201 18:08:16 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:24.201 18:08:16 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:24.201 18:08:16 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:24.201 18:08:16 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:24.201 18:08:16 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:24.201 18:08:16 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:24.201 18:08:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:24.201 ************************************ 00:16:24.201 START TEST test_save_ublk_config 00:16:24.201 ************************************ 00:16:24.201 18:08:16 ublk.test_save_ublk_config -- common/autotest_common.sh@1121 -- # test_save_config 00:16:24.201 18:08:16 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:24.201 18:08:16 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75446 00:16:24.201 18:08:16 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:24.201 18:08:16 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:24.201 18:08:16 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75446 00:16:24.201 18:08:16 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 75446 ']' 00:16:24.201 18:08:16 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.201 18:08:16 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:24.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.201 18:08:16 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
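test_save_ublk_config, which starts here, is an RPC round-trip: hand-build a ublk disk, dump the target's live configuration with save_config, then prove a fresh target can be rebuilt from that dump alone. A minimal sketch of the first half, assuming scripts/rpc.py against the default /var/tmp/spdk.sock (the log's rpc_cmd wrapper drives the same script; the malloc sizing is illustrative):

# Bring up a target, hand-build one ublk disk, and capture the config.
build/bin/spdk_tgt -L ublk &            # -L ublk enables the ublk DEBUG lines seen below
scripts/rpc.py ublk_create_target       # load the ublk target (cpumask optional)
scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # sizes illustrative; 4 KiB blocks
scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # ublk id 0 -> /dev/ublkb0
scripts/rpc.py save_config > ublk_config.json          # the JSON dump shown below
kill %1                                 # the test uses killprocess; this is the bare-shell equivalent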
00:16:24.201 18:08:16 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:24.201 18:08:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:24.459 [2024-05-15 18:08:16.810926] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:16:24.459 [2024-05-15 18:08:16.811099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75446 ] 00:16:24.717 [2024-05-15 18:08:16.983069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.976 [2024-05-15 18:08:17.253976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.911 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:25.911 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:16:25.911 18:08:18 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:25.911 18:08:18 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:25.911 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.911 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:25.911 [2024-05-15 18:08:18.072371] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:25.911 [2024-05-15 18:08:18.073544] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:25.911 malloc0 00:16:25.911 [2024-05-15 18:08:18.162499] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:25.911 [2024-05-15 18:08:18.162608] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:25.911 [2024-05-15 18:08:18.162633] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:25.911 [2024-05-15 18:08:18.162642] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:25.911 [2024-05-15 18:08:18.166630] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:25.911 [2024-05-15 18:08:18.166660] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:25.911 [2024-05-15 18:08:18.177384] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:25.911 [2024-05-15 18:08:18.177566] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:25.911 [2024-05-15 18:08:18.201339] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:25.911 0 00:16:25.911 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.911 18:08:18 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:25.911 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.911 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:26.170 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.170 18:08:18 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:26.170 "subsystems": [ 00:16:26.170 { 00:16:26.170 "subsystem": "keyring", 00:16:26.170 "config": [] 00:16:26.170 }, 00:16:26.170 { 00:16:26.170 "subsystem": "iobuf", 00:16:26.170 "config": [ 00:16:26.170 { 
00:16:26.170 "method": "iobuf_set_options", 00:16:26.170 "params": { 00:16:26.170 "small_pool_count": 8192, 00:16:26.170 "large_pool_count": 1024, 00:16:26.170 "small_bufsize": 8192, 00:16:26.170 "large_bufsize": 135168 00:16:26.170 } 00:16:26.170 } 00:16:26.170 ] 00:16:26.170 }, 00:16:26.170 { 00:16:26.170 "subsystem": "sock", 00:16:26.170 "config": [ 00:16:26.170 { 00:16:26.170 "method": "sock_impl_set_options", 00:16:26.170 "params": { 00:16:26.170 "impl_name": "posix", 00:16:26.170 "recv_buf_size": 2097152, 00:16:26.170 "send_buf_size": 2097152, 00:16:26.170 "enable_recv_pipe": true, 00:16:26.170 "enable_quickack": false, 00:16:26.170 "enable_placement_id": 0, 00:16:26.170 "enable_zerocopy_send_server": true, 00:16:26.170 "enable_zerocopy_send_client": false, 00:16:26.170 "zerocopy_threshold": 0, 00:16:26.170 "tls_version": 0, 00:16:26.170 "enable_ktls": false 00:16:26.170 } 00:16:26.170 }, 00:16:26.170 { 00:16:26.170 "method": "sock_impl_set_options", 00:16:26.170 "params": { 00:16:26.170 "impl_name": "ssl", 00:16:26.170 "recv_buf_size": 4096, 00:16:26.170 "send_buf_size": 4096, 00:16:26.170 "enable_recv_pipe": true, 00:16:26.170 "enable_quickack": false, 00:16:26.170 "enable_placement_id": 0, 00:16:26.170 "enable_zerocopy_send_server": true, 00:16:26.170 "enable_zerocopy_send_client": false, 00:16:26.170 "zerocopy_threshold": 0, 00:16:26.170 "tls_version": 0, 00:16:26.170 "enable_ktls": false 00:16:26.170 } 00:16:26.170 } 00:16:26.170 ] 00:16:26.170 }, 00:16:26.170 { 00:16:26.170 "subsystem": "vmd", 00:16:26.170 "config": [] 00:16:26.170 }, 00:16:26.170 { 00:16:26.170 "subsystem": "accel", 00:16:26.170 "config": [ 00:16:26.170 { 00:16:26.170 "method": "accel_set_options", 00:16:26.170 "params": { 00:16:26.170 "small_cache_size": 128, 00:16:26.170 "large_cache_size": 16, 00:16:26.170 "task_count": 2048, 00:16:26.170 "sequence_count": 2048, 00:16:26.170 "buf_count": 2048 00:16:26.170 } 00:16:26.170 } 00:16:26.170 ] 00:16:26.170 }, 00:16:26.170 { 00:16:26.170 "subsystem": "bdev", 00:16:26.170 "config": [ 00:16:26.170 { 00:16:26.170 "method": "bdev_set_options", 00:16:26.170 "params": { 00:16:26.170 "bdev_io_pool_size": 65535, 00:16:26.170 "bdev_io_cache_size": 256, 00:16:26.170 "bdev_auto_examine": true, 00:16:26.170 "iobuf_small_cache_size": 128, 00:16:26.170 "iobuf_large_cache_size": 16 00:16:26.170 } 00:16:26.170 }, 00:16:26.170 { 00:16:26.170 "method": "bdev_raid_set_options", 00:16:26.170 "params": { 00:16:26.170 "process_window_size_kb": 1024 00:16:26.170 } 00:16:26.170 }, 00:16:26.170 { 00:16:26.170 "method": "bdev_iscsi_set_options", 00:16:26.170 "params": { 00:16:26.170 "timeout_sec": 30 00:16:26.170 } 00:16:26.170 }, 00:16:26.170 { 00:16:26.170 "method": "bdev_nvme_set_options", 00:16:26.170 "params": { 00:16:26.170 "action_on_timeout": "none", 00:16:26.170 "timeout_us": 0, 00:16:26.170 "timeout_admin_us": 0, 00:16:26.170 "keep_alive_timeout_ms": 10000, 00:16:26.170 "arbitration_burst": 0, 00:16:26.170 "low_priority_weight": 0, 00:16:26.170 "medium_priority_weight": 0, 00:16:26.170 "high_priority_weight": 0, 00:16:26.170 "nvme_adminq_poll_period_us": 10000, 00:16:26.170 "nvme_ioq_poll_period_us": 0, 00:16:26.170 "io_queue_requests": 0, 00:16:26.170 "delay_cmd_submit": true, 00:16:26.170 "transport_retry_count": 4, 00:16:26.170 "bdev_retry_count": 3, 00:16:26.170 "transport_ack_timeout": 0, 00:16:26.170 "ctrlr_loss_timeout_sec": 0, 00:16:26.170 "reconnect_delay_sec": 0, 00:16:26.170 "fast_io_fail_timeout_sec": 0, 00:16:26.170 "disable_auto_failback": false, 00:16:26.170 
"generate_uuids": false, 00:16:26.170 "transport_tos": 0, 00:16:26.170 "nvme_error_stat": false, 00:16:26.170 "rdma_srq_size": 0, 00:16:26.170 "io_path_stat": false, 00:16:26.170 "allow_accel_sequence": false, 00:16:26.170 "rdma_max_cq_size": 0, 00:16:26.170 "rdma_cm_event_timeout_ms": 0, 00:16:26.170 "dhchap_digests": [ 00:16:26.170 "sha256", 00:16:26.170 "sha384", 00:16:26.170 "sha512" 00:16:26.170 ], 00:16:26.170 "dhchap_dhgroups": [ 00:16:26.170 "null", 00:16:26.170 "ffdhe2048", 00:16:26.170 "ffdhe3072", 00:16:26.170 "ffdhe4096", 00:16:26.170 "ffdhe6144", 00:16:26.170 "ffdhe8192" 00:16:26.170 ] 00:16:26.170 } 00:16:26.170 }, 00:16:26.170 { 00:16:26.170 "method": "bdev_nvme_set_hotplug", 00:16:26.170 "params": { 00:16:26.170 "period_us": 100000, 00:16:26.170 "enable": false 00:16:26.170 } 00:16:26.170 }, 00:16:26.170 { 00:16:26.170 "method": "bdev_malloc_create", 00:16:26.170 "params": { 00:16:26.170 "name": "malloc0", 00:16:26.170 "num_blocks": 8192, 00:16:26.170 "block_size": 4096, 00:16:26.170 "physical_block_size": 4096, 00:16:26.170 "uuid": "e4994091-e4e8-4d13-b849-496ac79c239e", 00:16:26.170 "optimal_io_boundary": 0 00:16:26.170 } 00:16:26.170 }, 00:16:26.170 { 00:16:26.171 "method": "bdev_wait_for_examine" 00:16:26.171 } 00:16:26.171 ] 00:16:26.171 }, 00:16:26.171 { 00:16:26.171 "subsystem": "scsi", 00:16:26.171 "config": null 00:16:26.171 }, 00:16:26.171 { 00:16:26.171 "subsystem": "scheduler", 00:16:26.171 "config": [ 00:16:26.171 { 00:16:26.171 "method": "framework_set_scheduler", 00:16:26.171 "params": { 00:16:26.171 "name": "static" 00:16:26.171 } 00:16:26.171 } 00:16:26.171 ] 00:16:26.171 }, 00:16:26.171 { 00:16:26.171 "subsystem": "vhost_scsi", 00:16:26.171 "config": [] 00:16:26.171 }, 00:16:26.171 { 00:16:26.171 "subsystem": "vhost_blk", 00:16:26.171 "config": [] 00:16:26.171 }, 00:16:26.171 { 00:16:26.171 "subsystem": "ublk", 00:16:26.171 "config": [ 00:16:26.171 { 00:16:26.171 "method": "ublk_create_target", 00:16:26.171 "params": { 00:16:26.171 "cpumask": "1" 00:16:26.171 } 00:16:26.171 }, 00:16:26.171 { 00:16:26.171 "method": "ublk_start_disk", 00:16:26.171 "params": { 00:16:26.171 "bdev_name": "malloc0", 00:16:26.171 "ublk_id": 0, 00:16:26.171 "num_queues": 1, 00:16:26.171 "queue_depth": 128 00:16:26.171 } 00:16:26.171 } 00:16:26.171 ] 00:16:26.171 }, 00:16:26.171 { 00:16:26.171 "subsystem": "nbd", 00:16:26.171 "config": [] 00:16:26.171 }, 00:16:26.171 { 00:16:26.171 "subsystem": "nvmf", 00:16:26.171 "config": [ 00:16:26.171 { 00:16:26.171 "method": "nvmf_set_config", 00:16:26.171 "params": { 00:16:26.171 "discovery_filter": "match_any", 00:16:26.171 "admin_cmd_passthru": { 00:16:26.171 "identify_ctrlr": false 00:16:26.171 } 00:16:26.171 } 00:16:26.171 }, 00:16:26.171 { 00:16:26.171 "method": "nvmf_set_max_subsystems", 00:16:26.171 "params": { 00:16:26.171 "max_subsystems": 1024 00:16:26.171 } 00:16:26.171 }, 00:16:26.171 { 00:16:26.171 "method": "nvmf_set_crdt", 00:16:26.171 "params": { 00:16:26.171 "crdt1": 0, 00:16:26.171 "crdt2": 0, 00:16:26.171 "crdt3": 0 00:16:26.171 } 00:16:26.171 } 00:16:26.171 ] 00:16:26.171 }, 00:16:26.171 { 00:16:26.171 "subsystem": "iscsi", 00:16:26.171 "config": [ 00:16:26.171 { 00:16:26.171 "method": "iscsi_set_options", 00:16:26.171 "params": { 00:16:26.171 "node_base": "iqn.2016-06.io.spdk", 00:16:26.171 "max_sessions": 128, 00:16:26.171 "max_connections_per_session": 2, 00:16:26.171 "max_queue_depth": 64, 00:16:26.171 "default_time2wait": 2, 00:16:26.171 "default_time2retain": 20, 00:16:26.171 "first_burst_length": 8192, 00:16:26.171 
"immediate_data": true, 00:16:26.171 "allow_duplicated_isid": false, 00:16:26.171 "error_recovery_level": 0, 00:16:26.171 "nop_timeout": 60, 00:16:26.171 "nop_in_interval": 30, 00:16:26.171 "disable_chap": false, 00:16:26.171 "require_chap": false, 00:16:26.171 "mutual_chap": false, 00:16:26.171 "chap_group": 0, 00:16:26.171 "max_large_datain_per_connection": 64, 00:16:26.171 "max_r2t_per_connection": 4, 00:16:26.171 "pdu_pool_size": 36864, 00:16:26.171 "immediate_data_pool_size": 16384, 00:16:26.171 "data_out_pool_size": 2048 00:16:26.171 } 00:16:26.171 } 00:16:26.171 ] 00:16:26.171 } 00:16:26.171 ] 00:16:26.171 }' 00:16:26.171 18:08:18 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75446 00:16:26.171 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 75446 ']' 00:16:26.171 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 75446 00:16:26.171 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:16:26.171 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:26.171 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75446 00:16:26.171 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:26.171 killing process with pid 75446 00:16:26.171 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:26.171 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75446' 00:16:26.171 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 75446 00:16:26.171 18:08:18 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 75446 00:16:27.547 [2024-05-15 18:08:19.851844] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:27.547 [2024-05-15 18:08:19.884423] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:27.547 [2024-05-15 18:08:19.888329] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:27.547 [2024-05-15 18:08:19.896370] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:27.547 [2024-05-15 18:08:19.896448] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:27.547 [2024-05-15 18:08:19.896465] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:27.547 [2024-05-15 18:08:19.896502] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:27.547 [2024-05-15 18:08:19.896723] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:28.923 18:08:21 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75506 00:16:28.923 18:08:21 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75506 00:16:28.923 18:08:21 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 75506 ']' 00:16:28.923 18:08:21 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.923 18:08:21 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:28.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.923 18:08:21 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:28.923 18:08:21 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:28.923 18:08:21 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:28.923 18:08:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:28.923 18:08:21 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:28.923 "subsystems": [ 00:16:28.923 { 00:16:28.923 "subsystem": "keyring", 00:16:28.923 "config": [] 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "subsystem": "iobuf", 00:16:28.923 "config": [ 00:16:28.923 { 00:16:28.923 "method": "iobuf_set_options", 00:16:28.923 "params": { 00:16:28.923 "small_pool_count": 8192, 00:16:28.923 "large_pool_count": 1024, 00:16:28.923 "small_bufsize": 8192, 00:16:28.923 "large_bufsize": 135168 00:16:28.923 } 00:16:28.923 } 00:16:28.923 ] 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "subsystem": "sock", 00:16:28.923 "config": [ 00:16:28.923 { 00:16:28.923 "method": "sock_impl_set_options", 00:16:28.923 "params": { 00:16:28.923 "impl_name": "posix", 00:16:28.923 "recv_buf_size": 2097152, 00:16:28.923 "send_buf_size": 2097152, 00:16:28.923 "enable_recv_pipe": true, 00:16:28.923 "enable_quickack": false, 00:16:28.923 "enable_placement_id": 0, 00:16:28.923 "enable_zerocopy_send_server": true, 00:16:28.923 "enable_zerocopy_send_client": false, 00:16:28.923 "zerocopy_threshold": 0, 00:16:28.923 "tls_version": 0, 00:16:28.923 "enable_ktls": false 00:16:28.923 } 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "method": "sock_impl_set_options", 00:16:28.923 "params": { 00:16:28.923 "impl_name": "ssl", 00:16:28.923 "recv_buf_size": 4096, 00:16:28.923 "send_buf_size": 4096, 00:16:28.923 "enable_recv_pipe": true, 00:16:28.923 "enable_quickack": false, 00:16:28.923 "enable_placement_id": 0, 00:16:28.923 "enable_zerocopy_send_server": true, 00:16:28.923 "enable_zerocopy_send_client": false, 00:16:28.923 "zerocopy_threshold": 0, 00:16:28.923 "tls_version": 0, 00:16:28.923 "enable_ktls": false 00:16:28.923 } 00:16:28.923 } 00:16:28.923 ] 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "subsystem": "vmd", 00:16:28.923 "config": [] 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "subsystem": "accel", 00:16:28.923 "config": [ 00:16:28.923 { 00:16:28.923 "method": "accel_set_options", 00:16:28.923 "params": { 00:16:28.923 "small_cache_size": 128, 00:16:28.923 "large_cache_size": 16, 00:16:28.923 "task_count": 2048, 00:16:28.923 "sequence_count": 2048, 00:16:28.923 "buf_count": 2048 00:16:28.923 } 00:16:28.923 } 00:16:28.923 ] 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "subsystem": "bdev", 00:16:28.923 "config": [ 00:16:28.923 { 00:16:28.923 "method": "bdev_set_options", 00:16:28.923 "params": { 00:16:28.923 "bdev_io_pool_size": 65535, 00:16:28.923 "bdev_io_cache_size": 256, 00:16:28.923 "bdev_auto_examine": true, 00:16:28.923 "iobuf_small_cache_size": 128, 00:16:28.923 "iobuf_large_cache_size": 16 00:16:28.923 } 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "method": "bdev_raid_set_options", 00:16:28.923 "params": { 00:16:28.923 "process_window_size_kb": 1024 00:16:28.923 } 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "method": "bdev_iscsi_set_options", 00:16:28.923 "params": { 00:16:28.923 "timeout_sec": 30 00:16:28.923 } 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "method": "bdev_nvme_set_options", 00:16:28.923 "params": { 00:16:28.923 "action_on_timeout": "none", 00:16:28.923 "timeout_us": 0, 00:16:28.923 "timeout_admin_us": 0, 00:16:28.923 "keep_alive_timeout_ms": 10000, 
00:16:28.923 "arbitration_burst": 0, 00:16:28.923 "low_priority_weight": 0, 00:16:28.923 "medium_priority_weight": 0, 00:16:28.923 "high_priority_weight": 0, 00:16:28.923 "nvme_adminq_poll_period_us": 10000, 00:16:28.923 "nvme_ioq_poll_period_us": 0, 00:16:28.923 "io_queue_requests": 0, 00:16:28.923 "delay_cmd_submit": true, 00:16:28.923 "transport_retry_count": 4, 00:16:28.923 "bdev_retry_count": 3, 00:16:28.923 "transport_ack_timeout": 0, 00:16:28.923 "ctrlr_loss_timeout_sec": 0, 00:16:28.923 "reconnect_delay_sec": 0, 00:16:28.923 "fast_io_fail_timeout_sec": 0, 00:16:28.923 "disable_auto_failback": false, 00:16:28.923 "generate_uuids": false, 00:16:28.923 "transport_tos": 0, 00:16:28.923 "nvme_error_stat": false, 00:16:28.923 "rdma_srq_size": 0, 00:16:28.923 "io_path_stat": false, 00:16:28.923 "allow_accel_sequence": false, 00:16:28.923 "rdma_max_cq_size": 0, 00:16:28.923 "rdma_cm_event_timeout_ms": 0, 00:16:28.923 "dhchap_digests": [ 00:16:28.923 "sha256", 00:16:28.923 "sha384", 00:16:28.923 "sha512" 00:16:28.923 ], 00:16:28.923 "dhchap_dhgroups": [ 00:16:28.923 "null", 00:16:28.923 "ffdhe2048", 00:16:28.923 "ffdhe3072", 00:16:28.923 "ffdhe4096", 00:16:28.923 "ffdhe6144", 00:16:28.923 "ffdhe8192" 00:16:28.923 ] 00:16:28.923 } 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "method": "bdev_nvme_set_hotplug", 00:16:28.923 "params": { 00:16:28.923 "period_us": 100000, 00:16:28.923 "enable": false 00:16:28.923 } 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "method": "bdev_malloc_create", 00:16:28.923 "params": { 00:16:28.923 "name": "malloc0", 00:16:28.923 "num_blocks": 8192, 00:16:28.923 "block_size": 4096, 00:16:28.923 "physical_block_size": 4096, 00:16:28.923 "uuid": "e4994091-e4e8-4d13-b849-496ac79c239e", 00:16:28.923 "optimal_io_boundary": 0 00:16:28.923 } 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "method": "bdev_wait_for_examine" 00:16:28.923 } 00:16:28.923 ] 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "subsystem": "scsi", 00:16:28.923 "config": null 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "subsystem": "scheduler", 00:16:28.923 "config": [ 00:16:28.923 { 00:16:28.923 "method": "framework_set_scheduler", 00:16:28.923 "params": { 00:16:28.923 "name": "static" 00:16:28.923 } 00:16:28.923 } 00:16:28.923 ] 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "subsystem": "vhost_scsi", 00:16:28.923 "config": [] 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "subsystem": "vhost_blk", 00:16:28.923 "config": [] 00:16:28.923 }, 00:16:28.923 { 00:16:28.923 "subsystem": "ublk", 00:16:28.923 "config": [ 00:16:28.924 { 00:16:28.924 "method": "ublk_create_target", 00:16:28.924 "params": { 00:16:28.924 "cpumask": "1" 00:16:28.924 } 00:16:28.924 }, 00:16:28.924 { 00:16:28.924 "method": "ublk_start_disk", 00:16:28.924 "params": { 00:16:28.924 "bdev_name": "malloc0", 00:16:28.924 "ublk_id": 0, 00:16:28.924 "num_queues": 1, 00:16:28.924 "queue_depth": 128 00:16:28.924 } 00:16:28.924 } 00:16:28.924 ] 00:16:28.924 }, 00:16:28.924 { 00:16:28.924 "subsystem": "nbd", 00:16:28.924 "config": [] 00:16:28.924 }, 00:16:28.924 { 00:16:28.924 "subsystem": "nvmf", 00:16:28.924 "config": [ 00:16:28.924 { 00:16:28.924 "method": "nvmf_set_config", 00:16:28.924 "params": { 00:16:28.924 "discovery_filter": "match_any", 00:16:28.924 "admin_cmd_passthru": { 00:16:28.924 "identify_ctrlr": false 00:16:28.924 } 00:16:28.924 } 00:16:28.924 }, 00:16:28.924 { 00:16:28.924 "method": "nvmf_set_max_subsystems", 00:16:28.924 "params": { 00:16:28.924 "max_subsystems": 1024 00:16:28.924 } 00:16:28.924 }, 00:16:28.924 { 00:16:28.924 "method": 
"nvmf_set_crdt", 00:16:28.924 "params": { 00:16:28.924 "crdt1": 0, 00:16:28.924 "crdt2": 0, 00:16:28.924 "crdt3": 0 00:16:28.924 } 00:16:28.924 } 00:16:28.924 ] 00:16:28.924 }, 00:16:28.924 { 00:16:28.924 "subsystem": "iscsi", 00:16:28.924 "config": [ 00:16:28.924 { 00:16:28.924 "method": "iscsi_set_options", 00:16:28.924 "params": { 00:16:28.924 "node_base": "iqn.2016-06.io.spdk", 00:16:28.924 "max_sessions": 128, 00:16:28.924 "max_connections_per_session": 2, 00:16:28.924 "max_queue_depth": 64, 00:16:28.924 "default_time2wait": 2, 00:16:28.924 "default_time2retain": 20, 00:16:28.924 "first_burst_length": 8192, 00:16:28.924 "immediate_data": true, 00:16:28.924 "allow_duplicated_isid": false, 00:16:28.924 "error_recovery_level": 0, 00:16:28.924 "nop_timeout": 60, 00:16:28.924 "nop_in_interval": 30, 00:16:28.924 "disable_chap": false, 00:16:28.924 "require_chap": false, 00:16:28.924 "mutual_chap": false, 00:16:28.924 "chap_group": 0, 00:16:28.924 "max_large_datain_per_connection": 64, 00:16:28.924 "max_r2t_per_connection": 4, 00:16:28.924 "pdu_pool_size": 36864, 00:16:28.924 "immediate_data_pool_size": 16384, 00:16:28.924 "data_out_pool_size": 2048 00:16:28.924 } 00:16:28.924 } 00:16:28.924 ] 00:16:28.924 } 00:16:28.924 ] 00:16:28.924 }' 00:16:28.924 [2024-05-15 18:08:21.286489] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:16:28.924 [2024-05-15 18:08:21.286661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75506 ] 00:16:29.183 [2024-05-15 18:08:21.457007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.442 [2024-05-15 18:08:21.707253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.378 [2024-05-15 18:08:22.643365] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:30.378 [2024-05-15 18:08:22.644507] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:30.378 [2024-05-15 18:08:22.651427] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:30.378 [2024-05-15 18:08:22.651528] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:30.378 [2024-05-15 18:08:22.651547] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:30.378 [2024-05-15 18:08:22.651557] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:30.378 [2024-05-15 18:08:22.660404] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:30.378 [2024-05-15 18:08:22.660438] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:30.378 [2024-05-15 18:08:22.667337] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:30.378 [2024-05-15 18:08:22.667476] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:30.378 [2024-05-15 18:08:22.684323] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:30.378 18:08:22 
ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75506 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 75506 ']' 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 75506 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75506 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:30.378 killing process with pid 75506 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75506' 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 75506 00:16:30.378 18:08:22 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 75506 00:16:32.279 [2024-05-15 18:08:24.260820] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:32.279 [2024-05-15 18:08:24.298406] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:32.279 [2024-05-15 18:08:24.298731] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:32.279 [2024-05-15 18:08:24.307376] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:32.279 [2024-05-15 18:08:24.307464] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:32.279 [2024-05-15 18:08:24.307485] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:32.279 [2024-05-15 18:08:24.307526] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:32.279 [2024-05-15 18:08:24.311554] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:33.215 18:08:25 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:33.215 00:16:33.215 real 0m8.915s 00:16:33.215 user 0m7.551s 00:16:33.215 sys 0m2.183s 00:16:33.215 18:08:25 ublk.test_save_ublk_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:33.215 ************************************ 00:16:33.215 END TEST test_save_ublk_config 00:16:33.215 ************************************ 00:16:33.215 18:08:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:33.215 18:08:25 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75590 00:16:33.215 18:08:25 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:33.215 18:08:25 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75590 00:16:33.215 18:08:25 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:33.215 
18:08:25 ublk -- common/autotest_common.sh@827 -- # '[' -z 75590 ']' 00:16:33.215 18:08:25 ublk -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.215 18:08:25 ublk -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:33.215 18:08:25 ublk -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.215 18:08:25 ublk -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:33.215 18:08:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.501 [2024-05-15 18:08:25.764751] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:16:33.501 [2024-05-15 18:08:25.764927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75590 ] 00:16:33.501 [2024-05-15 18:08:25.942550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:33.761 [2024-05-15 18:08:26.235173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.761 [2024-05-15 18:08:26.235175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.728 18:08:27 ublk -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:34.728 18:08:27 ublk -- common/autotest_common.sh@860 -- # return 0 00:16:34.728 18:08:27 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:34.728 18:08:27 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:34.728 18:08:27 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:34.728 18:08:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:34.728 ************************************ 00:16:34.728 START TEST test_create_ublk 00:16:34.728 ************************************ 00:16:34.728 18:08:27 ublk.test_create_ublk -- common/autotest_common.sh@1121 -- # test_create_ublk 00:16:34.728 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:34.728 18:08:27 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.728 18:08:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:34.728 [2024-05-15 18:08:27.106325] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:34.728 [2024-05-15 18:08:27.109145] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:34.728 18:08:27 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.728 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:34.728 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:34.728 18:08:27 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.728 18:08:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:34.986 18:08:27 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.986 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:34.986 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:34.986 18:08:27 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.986 18:08:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- 
# set +x 00:16:34.986 [2024-05-15 18:08:27.426504] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:34.986 [2024-05-15 18:08:27.427047] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:34.986 [2024-05-15 18:08:27.427074] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:34.986 [2024-05-15 18:08:27.427085] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:34.986 [2024-05-15 18:08:27.435674] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:34.986 [2024-05-15 18:08:27.435730] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:34.986 [2024-05-15 18:08:27.442347] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:34.986 [2024-05-15 18:08:27.456659] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:35.245 [2024-05-15 18:08:27.488339] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:35.245 18:08:27 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.245 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:35.245 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:35.245 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:35.245 18:08:27 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.245 18:08:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.245 18:08:27 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.245 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:35.245 { 00:16:35.245 "ublk_device": "/dev/ublkb0", 00:16:35.245 "id": 0, 00:16:35.245 "queue_depth": 512, 00:16:35.245 "num_queues": 4, 00:16:35.245 "bdev_name": "Malloc0" 00:16:35.245 } 00:16:35.245 ]' 00:16:35.245 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:35.245 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:35.245 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:35.245 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:35.245 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:35.245 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:35.245 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:35.245 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:35.245 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:35.503 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:35.503 18:08:27 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:35.503 18:08:27 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:35.503 18:08:27 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:35.503 18:08:27 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:35.504 18:08:27 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:35.504 18:08:27 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 
00:16:35.504 18:08:27 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:35.504 18:08:27 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:35.504 18:08:27 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:35.504 18:08:27 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:35.504 18:08:27 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:35.504 18:08:27 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:35.504 fio: verification read phase will never start because write phase uses all of runtime 00:16:35.504 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:35.504 fio-3.35 00:16:35.504 Starting 1 process 00:16:45.577 00:16:45.577 fio_test: (groupid=0, jobs=1): err= 0: pid=75642: Wed May 15 18:08:37 2024 00:16:45.577 write: IOPS=10.3k, BW=40.4MiB/s (42.3MB/s)(404MiB/10001msec); 0 zone resets 00:16:45.577 clat (usec): min=59, max=8016, avg=95.36, stdev=159.91 00:16:45.577 lat (usec): min=59, max=8020, avg=96.09, stdev=159.93 00:16:45.577 clat percentiles (usec): 00:16:45.577 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 80], 00:16:45.577 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 83], 60.00th=[ 85], 00:16:45.577 | 70.00th=[ 87], 80.00th=[ 91], 90.00th=[ 98], 95.00th=[ 109], 00:16:45.577 | 99.00th=[ 135], 99.50th=[ 159], 99.90th=[ 3261], 99.95th=[ 3523], 00:16:45.577 | 99.99th=[ 3752] 00:16:45.577 bw ( KiB/s): min=19025, max=44168, per=99.83%, avg=41283.00, stdev=5594.93, samples=19 00:16:45.577 iops : min= 4756, max=11042, avg=10320.74, stdev=1398.79, samples=19 00:16:45.577 lat (usec) : 100=92.04%, 250=7.54%, 500=0.02%, 750=0.01%, 1000=0.03% 00:16:45.577 lat (msec) : 2=0.11%, 4=0.25%, 10=0.01% 00:16:45.577 cpu : usr=2.50%, sys=7.34%, ctx=103409, majf=0, minf=795 00:16:45.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.577 issued rwts: total=0,103394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.577 00:16:45.577 Run status group 0 (all jobs): 00:16:45.577 WRITE: bw=40.4MiB/s (42.3MB/s), 40.4MiB/s-40.4MiB/s (42.3MB/s-42.3MB/s), io=404MiB (424MB), run=10001-10001msec 00:16:45.577 00:16:45.577 Disk stats (read/write): 00:16:45.577 ublkb0: ios=0/102301, merge=0/0, ticks=0/8983, in_queue=8983, util=99.11% 00:16:45.577 18:08:37 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:45.577 18:08:37 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.577 18:08:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.577 [2024-05-15 18:08:37.992874] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:45.577 [2024-05-15 18:08:38.030116] ublk.c: 
328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:45.577 [2024-05-15 18:08:38.034614] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:45.577 [2024-05-15 18:08:38.041329] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:45.577 [2024-05-15 18:08:38.041697] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:45.577 [2024-05-15 18:08:38.041719] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.577 18:08:38 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.577 [2024-05-15 18:08:38.054539] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:45.577 request: 00:16:45.577 { 00:16:45.577 "ublk_id": 0, 00:16:45.577 "method": "ublk_stop_disk", 00:16:45.577 "req_id": 1 00:16:45.577 } 00:16:45.577 Got JSON-RPC error response 00:16:45.577 response: 00:16:45.577 { 00:16:45.577 "code": -19, 00:16:45.577 "message": "No such device" 00:16:45.577 } 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:45.577 18:08:38 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.577 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.577 [2024-05-15 18:08:38.072465] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:45.836 [2024-05-15 18:08:38.078647] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:45.836 [2024-05-15 18:08:38.078702] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:45.836 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.836 18:08:38 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:45.836 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.836 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.094 18:08:38 ublk.test_create_ublk -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.094 18:08:38 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:46.094 18:08:38 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:46.094 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.094 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.094 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.094 18:08:38 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:46.094 18:08:38 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:46.094 18:08:38 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:46.094 18:08:38 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:46.094 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.094 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.094 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.094 18:08:38 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:46.094 18:08:38 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:46.094 18:08:38 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:46.094 00:16:46.094 real 0m11.448s 00:16:46.094 user 0m0.670s 00:16:46.094 sys 0m0.845s 00:16:46.094 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:46.094 ************************************ 00:16:46.094 END TEST test_create_ublk 00:16:46.094 ************************************ 00:16:46.094 18:08:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.094 18:08:38 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:46.094 18:08:38 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:46.094 18:08:38 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:46.094 18:08:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.352 ************************************ 00:16:46.352 START TEST test_create_multi_ublk 00:16:46.352 ************************************ 00:16:46.352 18:08:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@1121 -- # test_create_multi_ublk 00:16:46.352 18:08:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:46.352 18:08:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.352 18:08:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.352 [2024-05-15 18:08:38.605320] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:46.352 [2024-05-15 18:08:38.608075] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:46.352 18:08:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.352 18:08:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:46.352 18:08:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:46.352 18:08:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:46.352 18:08:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:46.352 18:08:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 
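Sketch: the multi-ublk test repeats the same bring-up once per device; condensed, under the same rpc.py assumption as the sketch above.

  for i in 0 1 2 3; do                               # MAX_DEV_ID is 3 in this run
    $rpc bdev_malloc_create -b Malloc$i 128 4096     # named form used by this test
    $rpc ublk_start_disk Malloc$i $i -q 4 -d 512     # becomes /dev/ublkb$i
  done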
00:16:46.352 18:08:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.610 18:08:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.611 18:08:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:46.611 18:08:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:46.611 18:08:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.611 18:08:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.611 [2024-05-15 18:08:38.877504] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:46.611 [2024-05-15 18:08:38.878053] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:46.611 [2024-05-15 18:08:38.878079] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:46.611 [2024-05-15 18:08:38.878093] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:46.611 [2024-05-15 18:08:38.883859] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:46.611 [2024-05-15 18:08:38.883941] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:46.611 [2024-05-15 18:08:38.892328] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:46.611 [2024-05-15 18:08:38.893113] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:46.611 [2024-05-15 18:08:38.907449] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:46.611 18:08:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.611 18:08:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:46.611 18:08:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:46.611 18:08:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:46.611 18:08:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.611 18:08:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.869 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.869 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:46.869 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:46.869 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.869 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.869 [2024-05-15 18:08:39.195496] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:46.869 [2024-05-15 18:08:39.196069] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:46.869 [2024-05-15 18:08:39.196099] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:46.869 [2024-05-15 18:08:39.196110] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:46.869 [2024-05-15 18:08:39.204722] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:46.869 [2024-05-15 18:08:39.204755] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_SET_PARAMS 00:16:46.869 [2024-05-15 18:08:39.211335] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:46.869 [2024-05-15 18:08:39.212218] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:46.869 [2024-05-15 18:08:39.220382] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:46.869 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.869 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:46.869 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:46.869 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:46.869 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.869 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.127 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.127 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:47.127 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:47.127 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.127 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.127 [2024-05-15 18:08:39.507525] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:47.127 [2024-05-15 18:08:39.508113] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:47.127 [2024-05-15 18:08:39.508138] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:47.127 [2024-05-15 18:08:39.508155] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:47.127 [2024-05-15 18:08:39.515390] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:47.127 [2024-05-15 18:08:39.515426] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:47.127 [2024-05-15 18:08:39.523328] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:47.127 [2024-05-15 18:08:39.524222] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:47.127 [2024-05-15 18:08:39.532375] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:47.127 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.127 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:47.127 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:47.127 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:47.127 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.127 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 
-d 512 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.386 [2024-05-15 18:08:39.819496] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:47.386 [2024-05-15 18:08:39.820058] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:47.386 [2024-05-15 18:08:39.820087] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:47.386 [2024-05-15 18:08:39.820099] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:47.386 [2024-05-15 18:08:39.828673] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:47.386 [2024-05-15 18:08:39.828703] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:47.386 [2024-05-15 18:08:39.835334] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:47.386 [2024-05-15 18:08:39.836186] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:47.386 [2024-05-15 18:08:39.848343] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:47.386 { 00:16:47.386 "ublk_device": "/dev/ublkb0", 00:16:47.386 "id": 0, 00:16:47.386 "queue_depth": 512, 00:16:47.386 "num_queues": 4, 00:16:47.386 "bdev_name": "Malloc0" 00:16:47.386 }, 00:16:47.386 { 00:16:47.386 "ublk_device": "/dev/ublkb1", 00:16:47.386 "id": 1, 00:16:47.386 "queue_depth": 512, 00:16:47.386 "num_queues": 4, 00:16:47.386 "bdev_name": "Malloc1" 00:16:47.386 }, 00:16:47.386 { 00:16:47.386 "ublk_device": "/dev/ublkb2", 00:16:47.386 "id": 2, 00:16:47.386 "queue_depth": 512, 00:16:47.386 "num_queues": 4, 00:16:47.386 "bdev_name": "Malloc2" 00:16:47.386 }, 00:16:47.386 { 00:16:47.386 "ublk_device": "/dev/ublkb3", 00:16:47.386 "id": 3, 00:16:47.386 "queue_depth": 512, 00:16:47.386 "num_queues": 4, 00:16:47.386 "bdev_name": "Malloc3" 00:16:47.386 } 00:16:47.386 ]' 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:47.386 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:47.644 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:47.644 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:47.644 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:47.644 18:08:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:47.644 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 
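Sketch: the jq checks that follow walk every field of that four-entry array; an equivalent compact loop, same assumptions as above.

  for i in 0 1 2 3; do
    disks=$($rpc ublk_get_disks)
    [[ $(jq -r ".[$i].ublk_device" <<<"$disks") == "/dev/ublkb$i" ]]
    [[ $(jq -r ".[$i].queue_depth" <<<"$disks") == 512 ]]
    [[ $(jq -r ".[$i].num_queues"  <<<"$disks") == 4 ]]
    [[ $(jq -r ".[$i].bdev_name"   <<<"$disks") == "Malloc$i" ]]
  done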
00:16:47.644 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:47.644 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:47.644 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:47.905 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:47.905 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:47.905 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:47.905 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:47.905 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:47.905 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:47.905 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:47.905 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:47.905 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:47.905 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:47.905 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:48.164 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:48.164 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.164 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:48.164 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:48.164 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:48.164 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:48.164 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:48.164 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:48.164 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:48.164 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:48.164 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:48.423 18:08:40 
ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.423 18:08:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.423 [2024-05-15 18:08:40.908548] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.683 [2024-05-15 18:08:40.960399] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.683 [2024-05-15 18:08:40.961635] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.683 [2024-05-15 18:08:40.971373] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.683 [2024-05-15 18:08:40.971790] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:48.683 [2024-05-15 18:08:40.971813] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:48.683 18:08:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.683 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.683 18:08:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:48.683 18:08:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.683 18:08:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.683 [2024-05-15 18:08:40.975489] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.683 [2024-05-15 18:08:41.011394] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.683 [2024-05-15 18:08:41.012750] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.683 [2024-05-15 18:08:41.021502] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.683 [2024-05-15 18:08:41.021841] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:48.683 [2024-05-15 18:08:41.021867] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:48.683 18:08:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.683 18:08:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.683 18:08:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:48.683 18:08:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.683 18:08:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.683 [2024-05-15 18:08:41.036475] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.683 [2024-05-15 18:08:41.075372] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.683 [2024-05-15 18:08:41.080659] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.683 [2024-05-15 18:08:41.089350] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd 
UBLK_CMD_DEL_DEV completed 00:16:48.683 [2024-05-15 18:08:41.089721] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:48.683 [2024-05-15 18:08:41.089752] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:48.683 18:08:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.683 18:08:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.683 18:08:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:48.683 18:08:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.683 18:08:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.683 [2024-05-15 18:08:41.097506] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.683 [2024-05-15 18:08:41.145370] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.683 [2024-05-15 18:08:41.146595] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.683 [2024-05-15 18:08:41.154324] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.683 [2024-05-15 18:08:41.154671] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:48.683 [2024-05-15 18:08:41.154697] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:48.683 18:08:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.683 18:08:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:48.942 [2024-05-15 18:08:41.418458] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:48.942 [2024-05-15 18:08:41.424573] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:48.942 [2024-05-15 18:08:41.424633] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:49.202 18:08:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:49.202 18:08:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:49.202 18:08:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:49.202 18:08:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.202 18:08:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:49.461 18:08:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.461 18:08:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:49.461 18:08:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:49.461 18:08:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.461 18:08:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:49.721 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.721 18:08:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:49.721 18:08:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:49.721 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.721 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:49.980 18:08:42 ublk.test_create_multi_ublk 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.980 18:08:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:49.980 18:08:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:49.980 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.980 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:50.557 ************************************ 00:16:50.557 END TEST test_create_multi_ublk 00:16:50.557 ************************************ 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:50.557 00:16:50.557 real 0m4.354s 00:16:50.557 user 0m1.305s 00:16:50.557 sys 0m0.178s 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:50.557 18:08:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.557 18:08:42 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:50.557 18:08:42 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:50.557 18:08:42 ublk -- ublk/ublk.sh@130 -- # killprocess 75590 00:16:50.557 18:08:42 ublk -- common/autotest_common.sh@946 -- # '[' -z 75590 ']' 00:16:50.557 18:08:42 ublk -- common/autotest_common.sh@950 -- # kill -0 75590 00:16:50.557 18:08:42 ublk -- common/autotest_common.sh@951 -- # uname 00:16:50.557 18:08:42 ublk -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:50.557 18:08:42 ublk -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75590 00:16:50.557 killing process with pid 75590 00:16:50.557 18:08:43 ublk -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:50.557 18:08:43 ublk -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:50.557 18:08:43 ublk -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75590' 00:16:50.557 18:08:43 ublk -- common/autotest_common.sh@965 -- # kill 75590 00:16:50.557 18:08:43 ublk -- common/autotest_common.sh@970 -- # wait 75590 00:16:51.962 
[2024-05-15 18:08:44.098461] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:51.962 [2024-05-15 18:08:44.098527] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:52.898 ************************************ 00:16:52.898 END TEST ublk 00:16:52.898 ************************************ 00:16:52.898 00:16:52.898 real 0m28.691s 00:16:52.898 user 0m43.206s 00:16:52.898 sys 0m8.280s 00:16:52.898 18:08:45 ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:52.898 18:08:45 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:52.898 18:08:45 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:52.898 18:08:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:52.898 18:08:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:52.898 18:08:45 -- common/autotest_common.sh@10 -- # set +x 00:16:52.898 ************************************ 00:16:52.898 START TEST ublk_recovery 00:16:52.898 ************************************ 00:16:52.898 18:08:45 ublk_recovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:52.898 * Looking for test storage... 00:16:53.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:53.159 18:08:45 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:53.159 18:08:45 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:53.159 18:08:45 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:53.159 18:08:45 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:53.159 18:08:45 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:53.159 18:08:45 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:53.159 18:08:45 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:53.159 18:08:45 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:53.159 18:08:45 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:53.159 18:08:45 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:53.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.159 18:08:45 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75983 00:16:53.159 18:08:45 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.159 18:08:45 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:53.159 18:08:45 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75983 00:16:53.159 18:08:45 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 75983 ']' 00:16:53.159 18:08:45 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.159 18:08:45 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:53.159 18:08:45 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.159 18:08:45 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:53.159 18:08:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:53.159 [2024-05-15 18:08:45.510667] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
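Sketch: prerequisites and initial bring-up for the recovery scenario that follows, with paths as traced; the sleep stands in for the suite's waitforlisten helper (an assumed simplification).

  modprobe ublk_drv                                    # kernel side of ublk
  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/build/bin/spdk_tgt -m 0x3 -L ublk & tgt=$!     # two cores, ublk debug logging
  sleep 1                                              # crude wait for /var/tmp/spdk.sock
  rpc=$spdk/scripts/rpc.py
  $rpc ublk_create_target
  $rpc bdev_malloc_create -b malloc0 64 4096           # 64 MiB backing bdev
  $rpc ublk_start_disk malloc0 1 -q 2 -d 128           # /dev/ublkb1, 2 queues, depth 128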
00:16:53.159 [2024-05-15 18:08:45.511571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75983 ] 00:16:53.418 [2024-05-15 18:08:45.683026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:53.676 [2024-05-15 18:08:45.930499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.676 [2024-05-15 18:08:45.930526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.244 18:08:46 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:54.244 18:08:46 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:16:54.245 18:08:46 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:54.245 18:08:46 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.245 18:08:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:54.245 [2024-05-15 18:08:46.743390] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:54.503 [2024-05-15 18:08:46.746401] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:54.503 18:08:46 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.503 18:08:46 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:54.503 18:08:46 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.503 18:08:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:54.503 malloc0 00:16:54.503 18:08:46 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.503 18:08:46 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:54.503 18:08:46 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.503 18:08:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:54.503 [2024-05-15 18:08:46.898556] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:16:54.503 [2024-05-15 18:08:46.898718] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:54.503 [2024-05-15 18:08:46.898740] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:54.503 [2024-05-15 18:08:46.898750] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:54.503 [2024-05-15 18:08:46.906380] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:54.503 [2024-05-15 18:08:46.906408] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:54.503 [2024-05-15 18:08:46.914342] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:54.503 [2024-05-15 18:08:46.914556] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:54.503 [2024-05-15 18:08:46.924439] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:54.503 1 00:16:54.503 18:08:46 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.503 18:08:46 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:55.440 18:08:47 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76019 00:16:55.440 18:08:47 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:55.440 18:08:47 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # 
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:55.699 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.699 fio-3.35 00:16:55.699 Starting 1 process 00:17:00.972 18:08:52 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75983 00:17:00.972 18:08:52 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:06.347 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75983 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:06.347 18:08:57 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76129 00:17:06.347 18:08:57 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:06.347 18:08:57 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76129 00:17:06.347 18:08:57 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 76129 ']' 00:17:06.347 18:08:57 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.347 18:08:57 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:06.347 18:08:57 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.347 18:08:57 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:06.347 18:08:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:06.347 18:08:57 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:06.347 [2024-05-15 18:08:58.084164] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
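Sketch: the crash-and-recover core of the test, continuing the bring-up sketch above ($spdk, $rpc, $tgt as defined there); the fio flags are the ones traced in this run.

  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 & job=$!
  sleep 5
  kill -9 $tgt                                         # crash the target mid-I/O
  $spdk/build/bin/spdk_tgt -m 0x3 -L ublk & tgt=$!     # fresh target process
  sleep 1
  $rpc ublk_create_target
  $rpc bdev_malloc_create -b malloc0 64 4096           # same bdev as before the crash
  $rpc ublk_recover_disk malloc0 1                     # reattach the existing /dev/ublkb1
  wait $job                                            # fio rides out the crash and completes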
00:17:06.347 [2024-05-15 18:08:58.084370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76129 ] 00:17:06.347 [2024-05-15 18:08:58.261537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:06.347 [2024-05-15 18:08:58.565740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.347 [2024-05-15 18:08:58.565763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.283 18:08:59 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:07.283 18:08:59 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:17:07.283 18:08:59 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:07.283 18:08:59 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.283 18:08:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.283 [2024-05-15 18:08:59.436385] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:07.283 [2024-05-15 18:08:59.439250] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:07.283 18:08:59 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.283 18:08:59 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:07.283 18:08:59 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.283 18:08:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.283 malloc0 00:17:07.283 18:08:59 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.283 18:08:59 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:07.283 18:08:59 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.283 18:08:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.283 [2024-05-15 18:08:59.594565] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:07.283 [2024-05-15 18:08:59.594627] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:07.283 [2024-05-15 18:08:59.594647] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:07.283 [2024-05-15 18:08:59.602404] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:07.283 [2024-05-15 18:08:59.602466] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:07.283 1 00:17:07.283 [2024-05-15 18:08:59.602586] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:07.283 18:08:59 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.283 18:08:59 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76019 00:17:33.815 [2024-05-15 18:09:23.398392] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:33.815 [2024-05-15 18:09:23.406075] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:33.815 [2024-05-15 18:09:23.413769] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:33.815 [2024-05-15 18:09:23.413796] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:55.734 00:17:55.734 fio_test: (groupid=0, 
jobs=1): err= 0: pid=76028: Wed May 15 18:09:48 2024 00:17:55.734 read: IOPS=9722, BW=38.0MiB/s (39.8MB/s)(2279MiB/60003msec) 00:17:55.734 slat (nsec): min=1943, max=7468.5k, avg=6283.46, stdev=10186.35 00:17:55.734 clat (usec): min=1248, max=30491k, avg=6422.13, stdev=314278.08 00:17:55.734 lat (usec): min=1313, max=30491k, avg=6428.41, stdev=314278.08 00:17:55.734 clat percentiles (msec): 00:17:55.734 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:17:55.734 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 4], 60.00th=[ 4], 00:17:55.734 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:17:55.734 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:17:55.734 | 99.99th=[17113] 00:17:55.734 bw ( KiB/s): min=21320, max=85000, per=100.00%, avg=77904.76, stdev=10572.52, samples=59 00:17:55.734 iops : min= 5330, max=21250, avg=19476.19, stdev=2643.13, samples=59 00:17:55.734 write: IOPS=9713, BW=37.9MiB/s (39.8MB/s)(2277MiB/60003msec); 0 zone resets 00:17:55.734 slat (nsec): min=1949, max=4594.2k, avg=6361.20, stdev=6927.55 00:17:55.734 clat (usec): min=819, max=30491k, avg=6735.07, stdev=324395.62 00:17:55.734 lat (usec): min=824, max=30491k, avg=6741.43, stdev=324395.62 00:17:55.734 clat percentiles (msec): 00:17:55.734 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 00:17:55.734 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:17:55.734 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:17:55.734 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:17:55.734 | 99.99th=[17113] 00:17:55.734 bw ( KiB/s): min=21784, max=85408, per=100.00%, avg=77795.95, stdev=10546.32, samples=59 00:17:55.734 iops : min= 5446, max=21352, avg=19448.98, stdev=2636.58, samples=59 00:17:55.734 lat (usec) : 1000=0.01% 00:17:55.734 lat (msec) : 2=0.07%, 4=93.91%, 10=5.99%, 20=0.02%, >=2000=0.01% 00:17:55.734 cpu : usr=5.04%, sys=11.45%, ctx=37632, majf=0, minf=14 00:17:55.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:55.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.734 issued rwts: total=583356,582858,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.734 00:17:55.734 Run status group 0 (all jobs): 00:17:55.734 READ: bw=38.0MiB/s (39.8MB/s), 38.0MiB/s-38.0MiB/s (39.8MB/s-39.8MB/s), io=2279MiB (2389MB), run=60003-60003msec 00:17:55.734 WRITE: bw=37.9MiB/s (39.8MB/s), 37.9MiB/s-37.9MiB/s (39.8MB/s-39.8MB/s), io=2277MiB (2387MB), run=60003-60003msec 00:17:55.734 00:17:55.734 Disk stats (read/write): 00:17:55.734 ublkb1: ios=581168/580500, merge=0/0, ticks=3689060/3804173, in_queue=7493233, util=99.92% 00:17:55.734 18:09:48 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:55.734 18:09:48 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.734 18:09:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:55.734 [2024-05-15 18:09:48.191513] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:55.734 [2024-05-15 18:09:48.224492] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:55.734 [2024-05-15 18:09:48.224952] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:55.734 [2024-05-15 18:09:48.232397] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 
completed 00:17:55.734 [2024-05-15 18:09:48.232557] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:55.734 [2024-05-15 18:09:48.232575] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.993 18:09:48 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:55.993 [2024-05-15 18:09:48.249503] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:55.993 [2024-05-15 18:09:48.255855] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:55.993 [2024-05-15 18:09:48.255912] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.993 18:09:48 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:55.993 18:09:48 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:55.993 18:09:48 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76129 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@946 -- # '[' -z 76129 ']' 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@950 -- # kill -0 76129 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@951 -- # uname 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76129 00:17:55.993 killing process with pid 76129 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76129' 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@965 -- # kill 76129 00:17:55.993 18:09:48 ublk_recovery -- common/autotest_common.sh@970 -- # wait 76129 00:17:56.929 [2024-05-15 18:09:49.375003] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:56.929 [2024-05-15 18:09:49.375090] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:58.357 ************************************ 00:17:58.357 END TEST ublk_recovery 00:17:58.357 ************************************ 00:17:58.357 00:17:58.357 real 1m5.442s 00:17:58.357 user 1m51.302s 00:17:58.357 sys 0m18.342s 00:17:58.357 18:09:50 ublk_recovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:58.357 18:09:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:58.357 18:09:50 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:58.357 18:09:50 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:58.357 18:09:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.357 18:09:50 -- common/autotest_common.sh@10 -- # set +x 00:17:58.357 18:09:50 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:58.357 18:09:50 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:17:58.357 18:09:50 -- spdk/autotest.sh@275 -- # '[' 0 -eq 1 ']' 00:17:58.357 18:09:50 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:17:58.357 18:09:50 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:17:58.357 18:09:50 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:17:58.357 18:09:50 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:17:58.357 18:09:50 -- 
spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:17:58.357 18:09:50 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:17:58.357 18:09:50 -- spdk/autotest.sh@335 -- # '[' 1 -eq 1 ']' 00:17:58.357 18:09:50 -- spdk/autotest.sh@336 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:58.357 18:09:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:58.357 18:09:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:58.357 18:09:50 -- common/autotest_common.sh@10 -- # set +x 00:17:58.357 ************************************ 00:17:58.357 START TEST ftl 00:17:58.357 ************************************ 00:17:58.357 18:09:50 ftl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:58.616 * Looking for test storage... 00:17:58.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.616 18:09:50 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:58.616 18:09:50 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:58.616 18:09:50 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.616 18:09:50 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.616 18:09:50 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:58.616 18:09:50 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:58.616 18:09:50 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.616 18:09:50 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:58.616 18:09:50 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:58.616 18:09:50 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.616 18:09:50 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.616 18:09:50 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:58.616 18:09:50 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:58.616 18:09:50 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:58.616 18:09:50 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:58.616 18:09:50 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:58.616 18:09:50 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:58.616 18:09:50 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.616 18:09:50 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.616 18:09:50 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:58.616 18:09:50 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:58.616 18:09:50 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:58.616 18:09:50 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:58.616 18:09:50 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:58.616 18:09:50 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:58.616 18:09:50 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:58.616 18:09:50 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:58.616 18:09:50 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:17:58.616 18:09:50 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:58.616 18:09:50 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.617 18:09:50 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:58.617 18:09:50 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:58.617 18:09:50 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:58.617 18:09:50 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:58.617 18:09:50 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:58.875 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:59.135 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:59.135 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:59.135 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:59.135 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:59.135 18:09:51 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76919 00:17:59.135 18:09:51 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:59.135 18:09:51 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76919 00:17:59.135 18:09:51 ftl -- common/autotest_common.sh@827 -- # '[' -z 76919 ']' 00:17:59.135 18:09:51 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.135 18:09:51 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:59.135 18:09:51 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.135 18:09:51 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:59.135 18:09:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:59.135 [2024-05-15 18:09:51.612767] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
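The --wait-for-rpc flag above starts spdk_tgt with only its RPC server running; options that must be fixed before the subsystems come up, here disabling bdev auto-examine with bdev_set_options -d, are applied first, framework_start_init then performs the deferred initialization, and gen_nvme.sh pipes the controller-attach config in through load_subsystem_config (the /dev/fd/62 in the trace is that process substitution). A condensed sketch of the ordering, with repo-relative paths standing in for the absolute ones in this run:

    # condensed wait-for-rpc startup sequence, as traced below
    build/bin/spdk_tgt --wait-for-rpc &
    # the harness polls the socket via waitforlisten; standalone, wait explicitly:
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    scripts/rpc.py bdev_set_options -d        # must precede framework init
    scripts/rpc.py framework_start_init       # run the deferred subsystem init
    scripts/rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)
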
00:17:59.135 [2024-05-15 18:09:51.613189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76919 ] 00:17:59.394 [2024-05-15 18:09:51.795412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.652 [2024-05-15 18:09:52.131322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.219 18:09:52 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:00.219 18:09:52 ftl -- common/autotest_common.sh@860 -- # return 0 00:18:00.219 18:09:52 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:00.478 18:09:52 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:01.413 18:09:53 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:01.413 18:09:53 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:01.980 18:09:54 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:01.980 18:09:54 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:01.980 18:09:54 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:02.239 18:09:54 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:02.239 18:09:54 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:02.239 18:09:54 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:02.239 18:09:54 ftl -- ftl/ftl.sh@50 -- # break 00:18:02.239 18:09:54 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:02.239 18:09:54 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:02.239 18:09:54 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:02.239 18:09:54 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:02.497 18:09:54 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:02.497 18:09:54 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:02.497 18:09:54 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:02.497 18:09:54 ftl -- ftl/ftl.sh@63 -- # break 00:18:02.497 18:09:54 ftl -- ftl/ftl.sh@66 -- # killprocess 76919 00:18:02.497 18:09:54 ftl -- common/autotest_common.sh@946 -- # '[' -z 76919 ']' 00:18:02.497 18:09:54 ftl -- common/autotest_common.sh@950 -- # kill -0 76919 00:18:02.497 18:09:54 ftl -- common/autotest_common.sh@951 -- # uname 00:18:02.497 18:09:54 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:02.497 18:09:54 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76919 00:18:02.497 killing process with pid 76919 00:18:02.497 18:09:54 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:02.497 18:09:54 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:02.497 18:09:54 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76919' 00:18:02.497 18:09:54 ftl -- common/autotest_common.sh@965 -- # kill 76919 00:18:02.497 18:09:54 ftl -- common/autotest_common.sh@970 -- # wait 76919 00:18:05.031 18:09:57 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:05.031 18:09:57 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:05.031 18:09:57 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:05.031 18:09:57 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:05.031 18:09:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:05.031 ************************************ 00:18:05.031 START TEST ftl_fio_basic 00:18:05.031 ************************************ 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:05.031 * Looking for test storage... 00:18:05.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77054 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77054 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- common/autotest_common.sh@827 -- # '[' -z 77054 ']' 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:05.031 18:09:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:05.031 [2024-05-15 18:09:57.460266] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
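The fio.sh preamble above is a small dispatch table: the first two positional arguments pick the base and cache PCI devices, and the third selects one of the suite entries, whose value is the space-separated list of fio jobs to run (here 'basic' maps to randw-verify randw-verify-j2 randw-verify-depth128). A reduced sketch of that dispatch; the argument positions are inferred from the invocation above, and the .fio job-file suffix is illustrative:

    # reduced sketch of the fio.sh suite dispatch
    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'

    device=$1 cache_device=$2 tests=${suite[$3]:-}
    if [ -z "$tests" ]; then
        echo "unknown suite: $3" >&2
        exit 1
    fi
    for t in $tests; do
        echo "would run: $t.fio on $device (cache: $cache_device)"
    done
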
00:18:05.031 [2024-05-15 18:09:57.460444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77054 ] 00:18:05.290 [2024-05-15 18:09:57.636636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:05.549 [2024-05-15 18:09:57.905891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.549 [2024-05-15 18:09:57.905999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.550 [2024-05-15 18:09:57.906034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.486 18:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:06.486 18:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # return 0 00:18:06.486 18:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:06.486 18:09:58 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:06.486 18:09:58 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:06.486 18:09:58 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:06.486 18:09:58 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:06.486 18:09:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:06.762 18:09:59 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:06.762 18:09:59 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:06.762 18:09:59 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:06.762 18:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:18:06.762 18:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:06.762 18:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:18:06.762 18:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:18:06.762 18:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:07.021 18:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:07.021 { 00:18:07.021 "name": "nvme0n1", 00:18:07.021 "aliases": [ 00:18:07.021 "da2a5b98-37d8-4d19-ab44-b0dbcae152e7" 00:18:07.021 ], 00:18:07.021 "product_name": "NVMe disk", 00:18:07.021 "block_size": 4096, 00:18:07.021 "num_blocks": 1310720, 00:18:07.021 "uuid": "da2a5b98-37d8-4d19-ab44-b0dbcae152e7", 00:18:07.021 "assigned_rate_limits": { 00:18:07.021 "rw_ios_per_sec": 0, 00:18:07.021 "rw_mbytes_per_sec": 0, 00:18:07.021 "r_mbytes_per_sec": 0, 00:18:07.021 "w_mbytes_per_sec": 0 00:18:07.021 }, 00:18:07.021 "claimed": false, 00:18:07.021 "zoned": false, 00:18:07.021 "supported_io_types": { 00:18:07.021 "read": true, 00:18:07.021 "write": true, 00:18:07.021 "unmap": true, 00:18:07.021 "write_zeroes": true, 00:18:07.021 "flush": true, 00:18:07.021 "reset": true, 00:18:07.021 "compare": true, 00:18:07.021 "compare_and_write": false, 00:18:07.021 "abort": true, 00:18:07.021 "nvme_admin": true, 00:18:07.021 "nvme_io": true 00:18:07.021 }, 00:18:07.021 "driver_specific": { 00:18:07.021 "nvme": [ 00:18:07.021 { 00:18:07.021 "pci_address": "0000:00:11.0", 00:18:07.021 "trid": { 00:18:07.021 "trtype": "PCIe", 00:18:07.021 "traddr": "0000:00:11.0" 00:18:07.021 }, 
00:18:07.021 "ctrlr_data": { 00:18:07.021 "cntlid": 0, 00:18:07.021 "vendor_id": "0x1b36", 00:18:07.021 "model_number": "QEMU NVMe Ctrl", 00:18:07.021 "serial_number": "12341", 00:18:07.021 "firmware_revision": "8.0.0", 00:18:07.021 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:07.021 "oacs": { 00:18:07.021 "security": 0, 00:18:07.021 "format": 1, 00:18:07.021 "firmware": 0, 00:18:07.021 "ns_manage": 1 00:18:07.021 }, 00:18:07.021 "multi_ctrlr": false, 00:18:07.021 "ana_reporting": false 00:18:07.021 }, 00:18:07.021 "vs": { 00:18:07.021 "nvme_version": "1.4" 00:18:07.021 }, 00:18:07.021 "ns_data": { 00:18:07.021 "id": 1, 00:18:07.021 "can_share": false 00:18:07.021 } 00:18:07.021 } 00:18:07.021 ], 00:18:07.021 "mp_policy": "active_passive" 00:18:07.021 } 00:18:07.021 } 00:18:07.021 ]' 00:18:07.021 18:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:07.021 18:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:18:07.021 18:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:07.021 18:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=1310720 00:18:07.021 18:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:18:07.021 18:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 5120 00:18:07.021 18:09:59 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:07.021 18:09:59 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:07.021 18:09:59 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:07.021 18:09:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:07.021 18:09:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:07.279 18:09:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:07.279 18:09:59 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:07.537 18:09:59 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=25517134-dba2-4abf-90eb-9fcdaaf439fc 00:18:07.537 18:09:59 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 25517134-dba2-4abf-90eb-9fcdaaf439fc 00:18:07.796 18:10:00 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=35564738-a1cc-4c02-948c-b0f8aa7da511 00:18:07.797 18:10:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 35564738-a1cc-4c02-948c-b0f8aa7da511 00:18:07.797 18:10:00 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:07.797 18:10:00 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:07.797 18:10:00 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=35564738-a1cc-4c02-948c-b0f8aa7da511 00:18:07.797 18:10:00 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:07.797 18:10:00 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 35564738-a1cc-4c02-948c-b0f8aa7da511 00:18:07.797 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=35564738-a1cc-4c02-948c-b0f8aa7da511 00:18:07.797 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:07.797 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:18:07.797 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:18:07.797 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35564738-a1cc-4c02-948c-b0f8aa7da511 00:18:08.055 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:08.055 { 00:18:08.055 "name": "35564738-a1cc-4c02-948c-b0f8aa7da511", 00:18:08.055 "aliases": [ 00:18:08.056 "lvs/nvme0n1p0" 00:18:08.056 ], 00:18:08.056 "product_name": "Logical Volume", 00:18:08.056 "block_size": 4096, 00:18:08.056 "num_blocks": 26476544, 00:18:08.056 "uuid": "35564738-a1cc-4c02-948c-b0f8aa7da511", 00:18:08.056 "assigned_rate_limits": { 00:18:08.056 "rw_ios_per_sec": 0, 00:18:08.056 "rw_mbytes_per_sec": 0, 00:18:08.056 "r_mbytes_per_sec": 0, 00:18:08.056 "w_mbytes_per_sec": 0 00:18:08.056 }, 00:18:08.056 "claimed": false, 00:18:08.056 "zoned": false, 00:18:08.056 "supported_io_types": { 00:18:08.056 "read": true, 00:18:08.056 "write": true, 00:18:08.056 "unmap": true, 00:18:08.056 "write_zeroes": true, 00:18:08.056 "flush": false, 00:18:08.056 "reset": true, 00:18:08.056 "compare": false, 00:18:08.056 "compare_and_write": false, 00:18:08.056 "abort": false, 00:18:08.056 "nvme_admin": false, 00:18:08.056 "nvme_io": false 00:18:08.056 }, 00:18:08.056 "driver_specific": { 00:18:08.056 "lvol": { 00:18:08.056 "lvol_store_uuid": "25517134-dba2-4abf-90eb-9fcdaaf439fc", 00:18:08.056 "base_bdev": "nvme0n1", 00:18:08.056 "thin_provision": true, 00:18:08.056 "num_allocated_clusters": 0, 00:18:08.056 "snapshot": false, 00:18:08.056 "clone": false, 00:18:08.056 "esnap_clone": false 00:18:08.056 } 00:18:08.056 } 00:18:08.056 } 00:18:08.056 ]' 00:18:08.056 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:08.056 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:18:08.056 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:08.056 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:08.056 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:08.056 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:18:08.056 18:10:00 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:08.056 18:10:00 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:08.056 18:10:00 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:08.623 18:10:00 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:08.623 18:10:00 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:08.623 18:10:00 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 35564738-a1cc-4c02-948c-b0f8aa7da511 00:18:08.623 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=35564738-a1cc-4c02-948c-b0f8aa7da511 00:18:08.623 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:08.623 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:18:08.623 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:18:08.624 18:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35564738-a1cc-4c02-948c-b0f8aa7da511 00:18:08.882 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:08.883 { 00:18:08.883 "name": "35564738-a1cc-4c02-948c-b0f8aa7da511", 00:18:08.883 "aliases": [ 00:18:08.883 "lvs/nvme0n1p0" 
00:18:08.883 ], 00:18:08.883 "product_name": "Logical Volume", 00:18:08.883 "block_size": 4096, 00:18:08.883 "num_blocks": 26476544, 00:18:08.883 "uuid": "35564738-a1cc-4c02-948c-b0f8aa7da511", 00:18:08.883 "assigned_rate_limits": { 00:18:08.883 "rw_ios_per_sec": 0, 00:18:08.883 "rw_mbytes_per_sec": 0, 00:18:08.883 "r_mbytes_per_sec": 0, 00:18:08.883 "w_mbytes_per_sec": 0 00:18:08.883 }, 00:18:08.883 "claimed": false, 00:18:08.883 "zoned": false, 00:18:08.883 "supported_io_types": { 00:18:08.883 "read": true, 00:18:08.883 "write": true, 00:18:08.883 "unmap": true, 00:18:08.883 "write_zeroes": true, 00:18:08.883 "flush": false, 00:18:08.883 "reset": true, 00:18:08.883 "compare": false, 00:18:08.883 "compare_and_write": false, 00:18:08.883 "abort": false, 00:18:08.883 "nvme_admin": false, 00:18:08.883 "nvme_io": false 00:18:08.883 }, 00:18:08.883 "driver_specific": { 00:18:08.883 "lvol": { 00:18:08.883 "lvol_store_uuid": "25517134-dba2-4abf-90eb-9fcdaaf439fc", 00:18:08.883 "base_bdev": "nvme0n1", 00:18:08.883 "thin_provision": true, 00:18:08.883 "num_allocated_clusters": 0, 00:18:08.883 "snapshot": false, 00:18:08.883 "clone": false, 00:18:08.883 "esnap_clone": false 00:18:08.883 } 00:18:08.883 } 00:18:08.883 } 00:18:08.883 ]' 00:18:08.883 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:08.883 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:18:08.883 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:08.883 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:08.883 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:08.883 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:18:08.883 18:10:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:08.883 18:10:01 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:09.142 18:10:01 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:09.142 18:10:01 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:09.142 18:10:01 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:09.142 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:09.142 18:10:01 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 35564738-a1cc-4c02-948c-b0f8aa7da511 00:18:09.142 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=35564738-a1cc-4c02-948c-b0f8aa7da511 00:18:09.142 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:09.142 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:18:09.142 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:18:09.142 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35564738-a1cc-4c02-948c-b0f8aa7da511 00:18:09.400 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:09.400 { 00:18:09.400 "name": "35564738-a1cc-4c02-948c-b0f8aa7da511", 00:18:09.400 "aliases": [ 00:18:09.400 "lvs/nvme0n1p0" 00:18:09.400 ], 00:18:09.400 "product_name": "Logical Volume", 00:18:09.400 "block_size": 4096, 00:18:09.400 "num_blocks": 26476544, 00:18:09.400 "uuid": "35564738-a1cc-4c02-948c-b0f8aa7da511", 00:18:09.400 "assigned_rate_limits": { 00:18:09.400 "rw_ios_per_sec": 0, 
00:18:09.400 "rw_mbytes_per_sec": 0, 00:18:09.400 "r_mbytes_per_sec": 0, 00:18:09.400 "w_mbytes_per_sec": 0 00:18:09.400 }, 00:18:09.400 "claimed": false, 00:18:09.400 "zoned": false, 00:18:09.400 "supported_io_types": { 00:18:09.400 "read": true, 00:18:09.400 "write": true, 00:18:09.400 "unmap": true, 00:18:09.400 "write_zeroes": true, 00:18:09.400 "flush": false, 00:18:09.400 "reset": true, 00:18:09.400 "compare": false, 00:18:09.400 "compare_and_write": false, 00:18:09.400 "abort": false, 00:18:09.400 "nvme_admin": false, 00:18:09.400 "nvme_io": false 00:18:09.400 }, 00:18:09.400 "driver_specific": { 00:18:09.400 "lvol": { 00:18:09.400 "lvol_store_uuid": "25517134-dba2-4abf-90eb-9fcdaaf439fc", 00:18:09.400 "base_bdev": "nvme0n1", 00:18:09.400 "thin_provision": true, 00:18:09.400 "num_allocated_clusters": 0, 00:18:09.400 "snapshot": false, 00:18:09.400 "clone": false, 00:18:09.400 "esnap_clone": false 00:18:09.400 } 00:18:09.400 } 00:18:09.400 } 00:18:09.400 ]' 00:18:09.400 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:09.400 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:18:09.400 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:09.659 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:09.659 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:09.659 18:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:18:09.659 18:10:01 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:09.659 18:10:01 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:09.659 18:10:01 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 35564738-a1cc-4c02-948c-b0f8aa7da511 -c nvc0n1p0 --l2p_dram_limit 60 00:18:09.659 [2024-05-15 18:10:02.149841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.659 [2024-05-15 18:10:02.150393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:09.659 [2024-05-15 18:10:02.150548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:09.659 [2024-05-15 18:10:02.150639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.659 [2024-05-15 18:10:02.150816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.659 [2024-05-15 18:10:02.150945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:09.659 [2024-05-15 18:10:02.151050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:09.659 [2024-05-15 18:10:02.151126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.659 [2024-05-15 18:10:02.151233] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:09.659 [2024-05-15 18:10:02.152350] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:09.659 [2024-05-15 18:10:02.152478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.659 [2024-05-15 18:10:02.152555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:09.659 [2024-05-15 18:10:02.152636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:18:09.659 [2024-05-15 18:10:02.152709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:09.659 [2024-05-15 18:10:02.152877] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 639d4ca4-11f1-4129-969f-de1c007f7d48 00:18:09.659 [2024-05-15 18:10:02.154783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.659 [2024-05-15 18:10:02.154900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:09.659 [2024-05-15 18:10:02.154984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:09.659 [2024-05-15 18:10:02.155013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.919 [2024-05-15 18:10:02.164947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.919 [2024-05-15 18:10:02.165049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:09.919 [2024-05-15 18:10:02.165068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.808 ms 00:18:09.919 [2024-05-15 18:10:02.165083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.919 [2024-05-15 18:10:02.165234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.919 [2024-05-15 18:10:02.165275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:09.919 [2024-05-15 18:10:02.165314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:09.919 [2024-05-15 18:10:02.165335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.919 [2024-05-15 18:10:02.165437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.919 [2024-05-15 18:10:02.165465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:09.919 [2024-05-15 18:10:02.165479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:09.919 [2024-05-15 18:10:02.165494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.919 [2024-05-15 18:10:02.165546] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:09.919 [2024-05-15 18:10:02.170848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.919 [2024-05-15 18:10:02.170901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:09.919 [2024-05-15 18:10:02.170956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.317 ms 00:18:09.919 [2024-05-15 18:10:02.170970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.919 [2024-05-15 18:10:02.171025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.919 [2024-05-15 18:10:02.171044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:09.919 [2024-05-15 18:10:02.171060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:09.919 [2024-05-15 18:10:02.171073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.919 [2024-05-15 18:10:02.171132] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:09.919 [2024-05-15 18:10:02.171286] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:09.919 [2024-05-15 18:10:02.171337] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:09.919 [2024-05-15 18:10:02.171355] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:09.919 [2024-05-15 18:10:02.171382] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:09.920 [2024-05-15 18:10:02.171397] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:09.920 [2024-05-15 18:10:02.171413] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:09.920 [2024-05-15 18:10:02.171425] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:09.920 [2024-05-15 18:10:02.171441] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:09.920 [2024-05-15 18:10:02.171453] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:09.920 [2024-05-15 18:10:02.171468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.920 [2024-05-15 18:10:02.171481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:09.920 [2024-05-15 18:10:02.171496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:18:09.920 [2024-05-15 18:10:02.171514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.920 [2024-05-15 18:10:02.171600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.920 [2024-05-15 18:10:02.171616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:09.920 [2024-05-15 18:10:02.171631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:09.920 [2024-05-15 18:10:02.171643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.920 [2024-05-15 18:10:02.171763] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:09.920 [2024-05-15 18:10:02.171782] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:09.920 [2024-05-15 18:10:02.171800] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:09.920 [2024-05-15 18:10:02.171813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.920 [2024-05-15 18:10:02.171831] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:09.920 [2024-05-15 18:10:02.171843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:09.920 [2024-05-15 18:10:02.171864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:09.920 [2024-05-15 18:10:02.171876] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:09.920 [2024-05-15 18:10:02.171890] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:09.920 [2024-05-15 18:10:02.171901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:09.920 [2024-05-15 18:10:02.171916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:09.920 [2024-05-15 18:10:02.171928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:09.920 [2024-05-15 18:10:02.171942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:09.920 [2024-05-15 18:10:02.171954] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:09.920 [2024-05-15 18:10:02.171976] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:09.920 [2024-05-15 18:10:02.171987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.920 
[2024-05-15 18:10:02.172000] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:09.920 [2024-05-15 18:10:02.172019] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:09.920 [2024-05-15 18:10:02.172035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.920 [2024-05-15 18:10:02.172047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:09.920 [2024-05-15 18:10:02.172062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:09.920 [2024-05-15 18:10:02.172074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:09.920 [2024-05-15 18:10:02.172087] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:09.920 [2024-05-15 18:10:02.172099] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:09.920 [2024-05-15 18:10:02.172112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:09.920 [2024-05-15 18:10:02.172123] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:09.920 [2024-05-15 18:10:02.172137] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:09.920 [2024-05-15 18:10:02.172148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:09.920 [2024-05-15 18:10:02.172162] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:09.920 [2024-05-15 18:10:02.172173] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:09.920 [2024-05-15 18:10:02.172187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:09.920 [2024-05-15 18:10:02.172198] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:09.920 [2024-05-15 18:10:02.172211] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:09.920 [2024-05-15 18:10:02.172222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:09.920 [2024-05-15 18:10:02.172238] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:09.920 [2024-05-15 18:10:02.172250] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:09.920 [2024-05-15 18:10:02.172265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:09.920 [2024-05-15 18:10:02.172276] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:09.920 [2024-05-15 18:10:02.172290] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:09.920 [2024-05-15 18:10:02.172316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:09.920 [2024-05-15 18:10:02.172353] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:09.920 [2024-05-15 18:10:02.172367] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:09.920 [2024-05-15 18:10:02.172382] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:09.920 [2024-05-15 18:10:02.172394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.920 [2024-05-15 18:10:02.172409] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:09.920 [2024-05-15 18:10:02.172421] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:09.920 [2024-05-15 18:10:02.172435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:09.920 [2024-05-15 18:10:02.172446] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region data_btm 00:18:09.920 [2024-05-15 18:10:02.172460] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:09.920 [2024-05-15 18:10:02.172477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:09.920 [2024-05-15 18:10:02.172496] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:09.920 [2024-05-15 18:10:02.172511] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:09.920 [2024-05-15 18:10:02.172529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:09.920 [2024-05-15 18:10:02.172541] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:09.920 [2024-05-15 18:10:02.172557] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:09.920 [2024-05-15 18:10:02.172570] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:09.920 [2024-05-15 18:10:02.172587] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:09.920 [2024-05-15 18:10:02.172599] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:09.920 [2024-05-15 18:10:02.172614] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:09.920 [2024-05-15 18:10:02.172627] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:09.920 [2024-05-15 18:10:02.172641] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:09.920 [2024-05-15 18:10:02.172653] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:09.920 [2024-05-15 18:10:02.172670] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:09.920 [2024-05-15 18:10:02.172683] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:09.920 [2024-05-15 18:10:02.172699] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:09.920 [2024-05-15 18:10:02.172711] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:09.920 [2024-05-15 18:10:02.172730] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:09.920 [2024-05-15 18:10:02.172744] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:09.920 [2024-05-15 18:10:02.172759] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:09.920 [2024-05-15 
18:10:02.172772] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:09.920 [2024-05-15 18:10:02.172788] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:09.920 [2024-05-15 18:10:02.172801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.920 [2024-05-15 18:10:02.172817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:09.920 [2024-05-15 18:10:02.172831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:18:09.920 [2024-05-15 18:10:02.172849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.920 [2024-05-15 18:10:02.195066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.920 [2024-05-15 18:10:02.195149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:09.920 [2024-05-15 18:10:02.195186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.135 ms 00:18:09.920 [2024-05-15 18:10:02.195204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.920 [2024-05-15 18:10:02.195338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.920 [2024-05-15 18:10:02.195361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:09.920 [2024-05-15 18:10:02.195376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:09.920 [2024-05-15 18:10:02.195392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.920 [2024-05-15 18:10:02.242938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.920 [2024-05-15 18:10:02.243030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:09.920 [2024-05-15 18:10:02.243052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.458 ms 00:18:09.920 [2024-05-15 18:10:02.243068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.920 [2024-05-15 18:10:02.243144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.920 [2024-05-15 18:10:02.243164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:09.920 [2024-05-15 18:10:02.243178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:09.920 [2024-05-15 18:10:02.243193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.920 [2024-05-15 18:10:02.243902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.920 [2024-05-15 18:10:02.243946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:09.920 [2024-05-15 18:10:02.243962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:18:09.920 [2024-05-15 18:10:02.244001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.920 [2024-05-15 18:10:02.244171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.920 [2024-05-15 18:10:02.244207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:09.920 [2024-05-15 18:10:02.244224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:18:09.920 [2024-05-15 18:10:02.244243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.920 [2024-05-15 18:10:02.275952] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.920 [2024-05-15 18:10:02.276023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:09.920 [2024-05-15 18:10:02.276045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.664 ms 00:18:09.920 [2024-05-15 18:10:02.276061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.920 [2024-05-15 18:10:02.291135] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:09.920 [2024-05-15 18:10:02.312396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.920 [2024-05-15 18:10:02.312480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:09.920 [2024-05-15 18:10:02.312520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.120 ms 00:18:09.920 [2024-05-15 18:10:02.312534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.920 [2024-05-15 18:10:02.379113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.921 [2024-05-15 18:10:02.379202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:09.921 [2024-05-15 18:10:02.379228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.505 ms 00:18:09.921 [2024-05-15 18:10:02.379242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.921 [2024-05-15 18:10:02.379353] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:09.921 [2024-05-15 18:10:02.379382] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:18:14.124 [2024-05-15 18:10:05.731645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.124 [2024-05-15 18:10:05.731723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:14.124 [2024-05-15 18:10:05.731758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3352.273 ms 00:18:14.124 [2024-05-15 18:10:05.731777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.124 [2024-05-15 18:10:05.732046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.124 [2024-05-15 18:10:05.732078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:14.124 [2024-05-15 18:10:05.732137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:18:14.124 [2024-05-15 18:10:05.732152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.124 [2024-05-15 18:10:05.762912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.124 [2024-05-15 18:10:05.762958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:14.124 [2024-05-15 18:10:05.762986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.673 ms 00:18:14.124 [2024-05-15 18:10:05.762999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.124 [2024-05-15 18:10:05.793191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.124 [2024-05-15 18:10:05.793251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:14.124 [2024-05-15 18:10:05.793274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.132 ms 00:18:14.124 [2024-05-15 18:10:05.793287] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:14.124 [2024-05-15 18:10:05.793743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.124 [2024-05-15 18:10:05.793778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:14.124 [2024-05-15 18:10:05.793798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:18:14.124 [2024-05-15 18:10:05.793811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.124 [2024-05-15 18:10:05.874427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.124 [2024-05-15 18:10:05.874495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:14.124 [2024-05-15 18:10:05.874520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.536 ms 00:18:14.124 [2024-05-15 18:10:05.874539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.124 [2024-05-15 18:10:05.907530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.124 [2024-05-15 18:10:05.907590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:14.124 [2024-05-15 18:10:05.907614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.918 ms 00:18:14.124 [2024-05-15 18:10:05.907628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.124 [2024-05-15 18:10:05.912004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.124 [2024-05-15 18:10:05.912045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:14.124 [2024-05-15 18:10:05.912073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.311 ms 00:18:14.124 [2024-05-15 18:10:05.912086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.124 [2024-05-15 18:10:05.943269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.124 [2024-05-15 18:10:05.943329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:14.124 [2024-05-15 18:10:05.943353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.090 ms 00:18:14.124 [2024-05-15 18:10:05.943367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.124 [2024-05-15 18:10:05.943440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.124 [2024-05-15 18:10:05.943463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:14.124 [2024-05-15 18:10:05.943479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:14.124 [2024-05-15 18:10:05.943492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.124 [2024-05-15 18:10:05.943662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.124 [2024-05-15 18:10:05.943681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:14.124 [2024-05-15 18:10:05.943697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:14.124 [2024-05-15 18:10:05.943720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.124 [2024-05-15 18:10:05.945081] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3794.695 ms, result 0 00:18:14.124 { 00:18:14.124 "name": "ftl0", 00:18:14.124 "uuid": "639d4ca4-11f1-4129-969f-de1c007f7d48" 00:18:14.124 } 00:18:14.124 18:10:05 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # 
waitforbdev ftl0 00:18:14.124 18:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:18:14.124 18:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:14.124 18:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local i 00:18:14.124 18:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:14.124 18:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:14.124 18:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:14.124 18:10:06 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:14.124 [ 00:18:14.124 { 00:18:14.124 "name": "ftl0", 00:18:14.124 "aliases": [ 00:18:14.124 "639d4ca4-11f1-4129-969f-de1c007f7d48" 00:18:14.124 ], 00:18:14.124 "product_name": "FTL disk", 00:18:14.124 "block_size": 4096, 00:18:14.124 "num_blocks": 20971520, 00:18:14.124 "uuid": "639d4ca4-11f1-4129-969f-de1c007f7d48", 00:18:14.124 "assigned_rate_limits": { 00:18:14.124 "rw_ios_per_sec": 0, 00:18:14.124 "rw_mbytes_per_sec": 0, 00:18:14.124 "r_mbytes_per_sec": 0, 00:18:14.124 "w_mbytes_per_sec": 0 00:18:14.124 }, 00:18:14.124 "claimed": false, 00:18:14.124 "zoned": false, 00:18:14.124 "supported_io_types": { 00:18:14.124 "read": true, 00:18:14.124 "write": true, 00:18:14.124 "unmap": true, 00:18:14.124 "write_zeroes": true, 00:18:14.124 "flush": true, 00:18:14.124 "reset": false, 00:18:14.124 "compare": false, 00:18:14.124 "compare_and_write": false, 00:18:14.124 "abort": false, 00:18:14.124 "nvme_admin": false, 00:18:14.124 "nvme_io": false 00:18:14.124 }, 00:18:14.124 "driver_specific": { 00:18:14.124 "ftl": { 00:18:14.124 "base_bdev": "35564738-a1cc-4c02-948c-b0f8aa7da511", 00:18:14.124 "cache": "nvc0n1p0" 00:18:14.124 } 00:18:14.124 } 00:18:14.124 } 00:18:14.124 ] 00:18:14.124 18:10:06 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # return 0 00:18:14.124 18:10:06 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:14.124 18:10:06 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:14.383 18:10:06 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:14.383 18:10:06 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:14.642 [2024-05-15 18:10:06.997979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.642 [2024-05-15 18:10:06.998082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:14.642 [2024-05-15 18:10:06.998109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:14.642 [2024-05-15 18:10:06.998131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.642 [2024-05-15 18:10:06.998185] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:14.642 [2024-05-15 18:10:07.001887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.642 [2024-05-15 18:10:07.001938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:14.642 [2024-05-15 18:10:07.001973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.673 ms 00:18:14.642 [2024-05-15 18:10:07.001986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.642 
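The echo '{"subsystems": [' / save_subsystem_config -n bdev / echo ']}' trio above is fio.sh assembling the JSON that the fio SPDK bdev plugin consumes: save_subsystem_config -n bdev emits only the bdev subsystem's live configuration, and the echoes wrap it in the top-level document shape. A sketch of the assembly, assuming the output is redirected to the exported FTL_JSON_CONF path (redirections are not shown by xtrace), followed by a cross-check of the ftl0 numbers reported above:

    # wrap the bdev subsystem config for the fio bdev plugin
    {
        echo '{"subsystems": ['
        scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > "$FTL_JSON_CONF"

    # cross-check of the ftl0 bdev in the trace above:
    echo $(( 20971520 * 4096 / 1024 ** 3 ))   # 80 GiB logical space (num_blocks x 4 KiB)
    echo $(( 20971520 * 4 / 1024 ** 2 ))      # 80 MiB full L2P table at 4 B per entry

With --l2p_dram_limit 60 capping the resident share of that 80 MiB table, the earlier startup notice 'l2p maximum resident size is: 59 (of 60) MiB' is exactly what one would expect.
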
[2024-05-15 18:10:07.002493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.642 [2024-05-15 18:10:07.002526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:14.642 [2024-05-15 18:10:07.002545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:18:14.642 [2024-05-15 18:10:07.002561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.642 [2024-05-15 18:10:07.005744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.642 [2024-05-15 18:10:07.005772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:14.642 [2024-05-15 18:10:07.005833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.146 ms 00:18:14.642 [2024-05-15 18:10:07.005846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.642 [2024-05-15 18:10:07.012494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.642 [2024-05-15 18:10:07.012543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:14.642 [2024-05-15 18:10:07.012565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.592 ms 00:18:14.642 [2024-05-15 18:10:07.012577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.642 [2024-05-15 18:10:07.043113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.642 [2024-05-15 18:10:07.043171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:14.642 [2024-05-15 18:10:07.043209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.413 ms 00:18:14.642 [2024-05-15 18:10:07.043221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.642 [2024-05-15 18:10:07.061712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.642 [2024-05-15 18:10:07.061776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:14.642 [2024-05-15 18:10:07.061814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.430 ms 00:18:14.642 [2024-05-15 18:10:07.061828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.642 [2024-05-15 18:10:07.062066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.642 [2024-05-15 18:10:07.062102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:14.642 [2024-05-15 18:10:07.062128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:18:14.642 [2024-05-15 18:10:07.062141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.642 [2024-05-15 18:10:07.091984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.642 [2024-05-15 18:10:07.092042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:14.642 [2024-05-15 18:10:07.092065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.798 ms 00:18:14.642 [2024-05-15 18:10:07.092078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.642 [2024-05-15 18:10:07.120668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.642 [2024-05-15 18:10:07.120724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:14.642 [2024-05-15 18:10:07.120761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.529 ms 00:18:14.642 [2024-05-15 18:10:07.120773] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.903 [2024-05-15 18:10:07.150424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.903 [2024-05-15 18:10:07.150467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:14.903 [2024-05-15 18:10:07.150487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.592 ms 00:18:14.903 [2024-05-15 18:10:07.150499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.903 [2024-05-15 18:10:07.178947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.903 [2024-05-15 18:10:07.179003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:14.903 [2024-05-15 18:10:07.179040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.303 ms 00:18:14.903 [2024-05-15 18:10:07.179052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.903 [2024-05-15 18:10:07.179113] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:14.903 [2024-05-15 18:10:07.179166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179431] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 
18:10:07.179830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:14.903 [2024-05-15 18:10:07.179945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.179960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.179973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.179988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:18:14.904 [2024-05-15 18:10:07.180199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:14.904 [2024-05-15 18:10:07.180727] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:14.904 [2024-05-15 18:10:07.180742] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 639d4ca4-11f1-4129-969f-de1c007f7d48 00:18:14.904 [2024-05-15 18:10:07.180755] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:14.904 [2024-05-15 18:10:07.180779] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:14.904 [2024-05-15 18:10:07.180792] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:14.904 [2024-05-15 18:10:07.180806] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:14.904 [2024-05-15 18:10:07.180818] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:14.904 [2024-05-15 18:10:07.180833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:14.904 [2024-05-15 18:10:07.180845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:14.904 [2024-05-15 18:10:07.180859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:14.904 [2024-05-15 18:10:07.180870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:14.904 [2024-05-15 18:10:07.180886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.904 [2024-05-15 18:10:07.180898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:14.904 [2024-05-15 18:10:07.180916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 00:18:14.904 [2024-05-15 18:10:07.180929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.904 [2024-05-15 18:10:07.197555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.904 [2024-05-15 18:10:07.197607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:14.904 [2024-05-15 18:10:07.197644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.550 ms 00:18:14.904 [2024-05-15 18:10:07.197657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.904 [2024-05-15 18:10:07.197946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.904 [2024-05-15 18:10:07.197977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:18:14.904 [2024-05-15 18:10:07.197995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:18:14.904 [2024-05-15 18:10:07.198015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.904 [2024-05-15 18:10:07.256716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.904 [2024-05-15 18:10:07.256795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:14.904 [2024-05-15 18:10:07.256833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.904 [2024-05-15 18:10:07.256846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.904 [2024-05-15 18:10:07.256938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.904 [2024-05-15 18:10:07.256970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:14.904 [2024-05-15 18:10:07.256989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.904 [2024-05-15 18:10:07.257001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.904 [2024-05-15 18:10:07.257145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.904 [2024-05-15 18:10:07.257175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:14.904 [2024-05-15 18:10:07.257194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.904 [2024-05-15 18:10:07.257207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.904 [2024-05-15 18:10:07.257247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.904 [2024-05-15 18:10:07.257268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:14.905 [2024-05-15 18:10:07.257285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.905 [2024-05-15 18:10:07.257312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.905 [2024-05-15 18:10:07.375649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.905 [2024-05-15 18:10:07.375744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:14.905 [2024-05-15 18:10:07.375811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.905 [2024-05-15 18:10:07.375825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.164 [2024-05-15 18:10:07.415164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.164 [2024-05-15 18:10:07.415229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:15.164 [2024-05-15 18:10:07.415270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.164 [2024-05-15 18:10:07.415283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.164 [2024-05-15 18:10:07.415423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.164 [2024-05-15 18:10:07.415474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:15.164 [2024-05-15 18:10:07.415490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.164 [2024-05-15 18:10:07.415503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.164 [2024-05-15 18:10:07.415589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.164 [2024-05-15 
18:10:07.415607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:15.164 [2024-05-15 18:10:07.415622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.164 [2024-05-15 18:10:07.415635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.164 [2024-05-15 18:10:07.415801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.164 [2024-05-15 18:10:07.415831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:15.164 [2024-05-15 18:10:07.415854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.164 [2024-05-15 18:10:07.415876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.164 [2024-05-15 18:10:07.415949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.164 [2024-05-15 18:10:07.415968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:15.164 [2024-05-15 18:10:07.415987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.164 [2024-05-15 18:10:07.415999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.164 [2024-05-15 18:10:07.416066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.164 [2024-05-15 18:10:07.416088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:15.164 [2024-05-15 18:10:07.416109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.164 [2024-05-15 18:10:07.416121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.164 [2024-05-15 18:10:07.416198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.164 [2024-05-15 18:10:07.416215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:15.164 [2024-05-15 18:10:07.416231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.164 [2024-05-15 18:10:07.416243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.164 [2024-05-15 18:10:07.416453] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 418.426 ms, result 0 00:18:15.164 true 00:18:15.164 18:10:07 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77054 00:18:15.164 18:10:07 ftl.ftl_fio_basic -- common/autotest_common.sh@946 -- # '[' -z 77054 ']' 00:18:15.164 18:10:07 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # kill -0 77054 00:18:15.164 18:10:07 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # uname 00:18:15.164 18:10:07 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:15.164 18:10:07 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77054 00:18:15.164 18:10:07 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:15.164 18:10:07 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:15.164 18:10:07 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77054' 00:18:15.164 killing process with pid 77054 00:18:15.164 18:10:07 ftl.ftl_fio_basic -- common/autotest_common.sh@965 -- # kill 77054 00:18:15.164 18:10:07 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # wait 77054 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- 
ftl/fio.sh@78 -- # for test in ${tests} 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:20.436 18:10:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:20.436 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:20.436 fio-3.35 00:18:20.436 Starting 1 thread 00:18:25.764 00:18:25.764 test: (groupid=0, jobs=1): err= 0: pid=77272: Wed May 15 18:10:17 2024 00:18:25.764 read: IOPS=933, BW=62.0MiB/s (65.0MB/s)(255MiB/4106msec) 00:18:25.764 slat (nsec): min=5633, max=51658, avg=7638.91, stdev=3235.65 00:18:25.764 clat (usec): min=320, max=847, avg=473.75, stdev=48.03 00:18:25.764 lat (usec): min=332, max=863, avg=481.39, stdev=48.75 00:18:25.764 clat percentiles (usec): 00:18:25.764 | 1.00th=[ 371], 5.00th=[ 412], 10.00th=[ 433], 20.00th=[ 441], 00:18:25.764 | 30.00th=[ 445], 40.00th=[ 453], 50.00th=[ 461], 60.00th=[ 474], 00:18:25.764 | 70.00th=[ 490], 80.00th=[ 515], 90.00th=[ 537], 95.00th=[ 562], 00:18:25.764 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 709], 99.95th=[ 758], 00:18:25.764 | 99.99th=[ 848] 00:18:25.764 write: IOPS=940, BW=62.4MiB/s (65.5MB/s)(256MiB/4102msec); 0 zone resets 00:18:25.764 slat (nsec): min=19595, max=84233, avg=24290.40, stdev=5543.42 00:18:25.764 clat (usec): min=387, max=1053, avg=548.29, stdev=62.49 00:18:25.764 
lat (usec): min=410, max=1075, avg=572.58, stdev=63.09 00:18:25.764 clat percentiles (usec): 00:18:25.764 | 1.00th=[ 441], 5.00th=[ 465], 10.00th=[ 474], 20.00th=[ 498], 00:18:25.764 | 30.00th=[ 529], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 553], 00:18:25.764 | 70.00th=[ 562], 80.00th=[ 578], 90.00th=[ 619], 95.00th=[ 644], 00:18:25.764 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 930], 99.95th=[ 979], 00:18:25.764 | 99.99th=[ 1057] 00:18:25.764 bw ( KiB/s): min=61336, max=65688, per=100.00%, avg=63971.00, stdev=1464.65, samples=8 00:18:25.764 iops : min= 902, max= 966, avg=940.75, stdev=21.54, samples=8 00:18:25.764 lat (usec) : 500=47.47%, 750=51.66%, 1000=0.86% 00:18:25.764 lat (msec) : 2=0.01% 00:18:25.764 cpu : usr=99.15%, sys=0.12%, ctx=9, majf=0, minf=1171 00:18:25.764 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.764 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.764 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.764 00:18:25.764 Run status group 0 (all jobs): 00:18:25.764 READ: bw=62.0MiB/s (65.0MB/s), 62.0MiB/s-62.0MiB/s (65.0MB/s-65.0MB/s), io=255MiB (267MB), run=4106-4106msec 00:18:25.764 WRITE: bw=62.4MiB/s (65.5MB/s), 62.4MiB/s-62.4MiB/s (65.5MB/s-65.5MB/s), io=256MiB (269MB), run=4102-4102msec 00:18:27.232 ----------------------------------------------------- 00:18:27.232 Suppressions used: 00:18:27.232 count bytes template 00:18:27.232 1 5 /usr/src/fio/parse.c 00:18:27.232 1 8 libtcmalloc_minimal.so 00:18:27.232 1 904 libcrypto.so 00:18:27.232 ----------------------------------------------------- 00:18:27.232 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:27.232 18:10:19 
ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:27.232 18:10:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:27.491 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:27.491 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:27.491 fio-3.35 00:18:27.491 Starting 2 threads 00:18:59.575 00:18:59.575 first_half: (groupid=0, jobs=1): err= 0: pid=77376: Wed May 15 18:10:49 2024 00:18:59.575 read: IOPS=2294, BW=9180KiB/s (9400kB/s)(255MiB/28428msec) 00:18:59.575 slat (nsec): min=4905, max=37085, avg=7445.07, stdev=1743.62 00:18:59.575 clat (usec): min=950, max=367742, avg=41107.25, stdev=22472.95 00:18:59.575 lat (usec): min=958, max=367749, avg=41114.70, stdev=22473.12 00:18:59.575 clat percentiles (msec): 00:18:59.575 | 1.00th=[ 11], 5.00th=[ 28], 10.00th=[ 38], 20.00th=[ 38], 00:18:59.575 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39], 00:18:59.575 | 70.00th=[ 39], 80.00th=[ 40], 90.00th=[ 45], 95.00th=[ 51], 00:18:59.575 | 99.00th=[ 174], 99.50th=[ 190], 99.90th=[ 262], 99.95th=[ 317], 00:18:59.575 | 99.99th=[ 359] 00:18:59.575 write: IOPS=2688, BW=10.5MiB/s (11.0MB/s)(256MiB/24375msec); 0 zone resets 00:18:59.575 slat (usec): min=5, max=175, avg= 9.61, stdev= 5.10 00:18:59.575 clat (usec): min=472, max=135314, avg=14558.86, stdev=25049.05 00:18:59.575 lat (usec): min=489, max=135322, avg=14568.47, stdev=25049.31 00:18:59.575 clat percentiles (usec): 00:18:59.575 | 1.00th=[ 930], 5.00th=[ 1237], 10.00th=[ 1467], 20.00th=[ 1958], 00:18:59.575 | 30.00th=[ 3425], 40.00th=[ 4817], 50.00th=[ 5997], 60.00th=[ 7046], 00:18:59.575 | 70.00th=[ 8455], 80.00th=[ 13566], 90.00th=[ 40633], 95.00th=[ 88605], 00:18:59.575 | 99.00th=[100140], 99.50th=[102237], 99.90th=[119014], 99.95th=[127402], 00:18:59.575 | 99.99th=[133694] 00:18:59.575 bw ( KiB/s): min= 912, max=39888, per=84.05%, avg=18078.90, stdev=9249.70, samples=29 00:18:59.575 iops : min= 228, max= 9972, avg=4519.72, stdev=2312.43, samples=29 00:18:59.575 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.75% 00:18:59.575 lat (msec) : 2=9.63%, 4=7.64%, 10=19.51%, 20=8.88%, 50=46.39% 00:18:59.575 lat (msec) : 100=5.25%, 250=1.86%, 500=0.06% 00:18:59.575 cpu : usr=99.05%, sys=0.27%, ctx=41, majf=0, minf=5616 00:18:59.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:59.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.575 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:59.575 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.575 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:18:59.575 second_half: (groupid=0, jobs=1): err= 0: pid=77377: Wed May 15 18:10:49 2024 00:18:59.575 read: IOPS=2309, BW=9238KiB/s (9460kB/s)(254MiB/28206msec) 00:18:59.575 slat (usec): min=4, max=112, avg= 7.30, stdev= 1.83 00:18:59.575 clat (usec): min=841, max=375320, avg=41922.96, stdev=20010.97 00:18:59.575 lat (usec): min=848, max=375334, avg=41930.26, stdev=20011.14 00:18:59.575 clat percentiles (msec): 00:18:59.575 | 1.00th=[ 6], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 38], 00:18:59.575 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 39], 00:18:59.575 | 70.00th=[ 39], 80.00th=[ 40], 90.00th=[ 45], 95.00th=[ 53], 00:18:59.575 | 99.00th=[ 159], 99.50th=[ 174], 99.90th=[ 201], 99.95th=[ 209], 00:18:59.575 | 99.99th=[ 368] 00:18:59.575 write: IOPS=3238, BW=12.6MiB/s (13.3MB/s)(256MiB/20237msec); 0 zone resets 00:18:59.575 slat (usec): min=5, max=343, avg= 9.60, stdev= 5.51 00:18:59.575 clat (usec): min=496, max=135821, avg=13384.16, stdev=24347.43 00:18:59.575 lat (usec): min=509, max=135829, avg=13393.76, stdev=24347.56 00:18:59.575 clat percentiles (usec): 00:18:59.575 | 1.00th=[ 1020], 5.00th=[ 1303], 10.00th=[ 1483], 20.00th=[ 1762], 00:18:59.575 | 30.00th=[ 2180], 40.00th=[ 3687], 50.00th=[ 5407], 60.00th=[ 6980], 00:18:59.575 | 70.00th=[ 8586], 80.00th=[ 13042], 90.00th=[ 17957], 95.00th=[ 87557], 00:18:59.575 | 99.00th=[100140], 99.50th=[101188], 99.90th=[120062], 99.95th=[128451], 00:18:59.575 | 99.99th=[132645] 00:18:59.575 bw ( KiB/s): min= 1000, max=40624, per=100.00%, avg=24966.10, stdev=11291.10, samples=21 00:18:59.575 iops : min= 250, max=10156, avg=6241.52, stdev=2822.77, samples=21 00:18:59.575 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.38% 00:18:59.575 lat (msec) : 2=13.03%, 4=8.03%, 10=15.93%, 20=9.06%, 50=46.05% 00:18:59.575 lat (msec) : 100=5.58%, 250=1.87%, 500=0.01% 00:18:59.575 cpu : usr=99.10%, sys=0.22%, ctx=121, majf=0, minf=5507 00:18:59.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:59.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.575 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:59.575 issued rwts: total=65143,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:59.575 00:18:59.575 Run status group 0 (all jobs): 00:18:59.575 READ: bw=17.9MiB/s (18.8MB/s), 9180KiB/s-9238KiB/s (9400kB/s-9460kB/s), io=509MiB (534MB), run=28206-28428msec 00:18:59.575 WRITE: bw=21.0MiB/s (22.0MB/s), 10.5MiB/s-12.6MiB/s (11.0MB/s-13.3MB/s), io=512MiB (537MB), run=20237-24375msec 00:18:59.575 ----------------------------------------------------- 00:18:59.575 Suppressions used: 00:18:59.575 count bytes template 00:18:59.575 2 10 /usr/src/fio/parse.c 00:18:59.575 3 288 /usr/src/fio/iolog.c 00:18:59.575 1 8 libtcmalloc_minimal.so 00:18:59.575 1 904 libcrypto.so 00:18:59.575 ----------------------------------------------------- 00:18:59.575 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 
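The fio_bdev invocation that follows (like the two runs above) goes through the same sanitizer detection each time: the test ldd's the SPDK fio plugin, extracts the path of the ASAN runtime it was linked against, and preloads that runtime ahead of the plugin so ASAN sits first in the initial library list, as the runtime requires. Condensed to a sketch using the paths from this run:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # The third ldd column is the resolved path of the linked libasan, if any.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # ASAN must be the first preloaded DSO, otherwise its runtime aborts at startup.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio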
00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:59.575 18:10:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:59.833 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:59.833 fio-3.35 00:18:59.833 Starting 1 thread 00:19:17.910 00:19:17.910 test: (groupid=0, jobs=1): err= 0: pid=77734: Wed May 15 18:11:09 2024 00:19:17.910 read: IOPS=6476, BW=25.3MiB/s (26.5MB/s)(255MiB/10067msec) 00:19:17.910 slat (nsec): min=4775, max=85800, avg=6762.19, stdev=1666.50 00:19:17.910 clat (usec): min=791, max=39097, avg=19752.32, stdev=1056.26 00:19:17.910 lat (usec): min=796, max=39103, avg=19759.09, stdev=1056.25 00:19:17.910 clat percentiles (usec): 00:19:17.910 | 1.00th=[18744], 5.00th=[19006], 10.00th=[19006], 20.00th=[19268], 00:19:17.910 | 30.00th=[19530], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:19:17.910 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20317], 95.00th=[20841], 00:19:17.910 | 99.00th=[23200], 99.50th=[23725], 99.90th=[29492], 99.95th=[34341], 00:19:17.910 | 99.99th=[38536] 00:19:17.910 write: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(256MiB/5657msec); 0 zone resets 00:19:17.910 slat (usec): min=5, max=333, avg= 9.34, stdev= 4.69 00:19:17.910 clat (usec): min=638, max=60881, avg=10988.44, stdev=13792.08 00:19:17.910 lat (usec): min=647, max=60889, avg=10997.79, stdev=13792.11 00:19:17.910 clat percentiles (usec): 00:19:17.910 | 1.00th=[ 979], 5.00th=[ 1188], 10.00th=[ 1319], 20.00th=[ 
1516], 00:19:17.910 | 30.00th=[ 1729], 40.00th=[ 2278], 50.00th=[ 7111], 60.00th=[ 8291], 00:19:17.910 | 70.00th=[ 9503], 80.00th=[11731], 90.00th=[39060], 95.00th=[43779], 00:19:17.910 | 99.00th=[48497], 99.50th=[50594], 99.90th=[54264], 99.95th=[56361], 00:19:17.910 | 99.99th=[58459] 00:19:17.910 bw ( KiB/s): min=12648, max=66032, per=94.28%, avg=43690.67, stdev=13384.80, samples=12 00:19:17.910 iops : min= 3162, max=16508, avg=10922.67, stdev=3346.20, samples=12 00:19:17.910 lat (usec) : 750=0.02%, 1000=0.61% 00:19:17.910 lat (msec) : 2=18.18%, 4=2.13%, 10=15.82%, 20=45.70%, 50=17.23% 00:19:17.910 lat (msec) : 100=0.31% 00:19:17.910 cpu : usr=98.96%, sys=0.21%, ctx=24, majf=0, minf=5567 00:19:17.910 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:17.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.910 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.910 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.910 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.910 00:19:17.910 Run status group 0 (all jobs): 00:19:17.910 READ: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=255MiB (267MB), run=10067-10067msec 00:19:17.910 WRITE: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=256MiB (268MB), run=5657-5657msec 00:19:18.478 ----------------------------------------------------- 00:19:18.478 Suppressions used: 00:19:18.478 count bytes template 00:19:18.478 1 5 /usr/src/fio/parse.c 00:19:18.478 2 192 /usr/src/fio/iolog.c 00:19:18.478 1 8 libtcmalloc_minimal.so 00:19:18.478 1 904 libcrypto.so 00:19:18.478 ----------------------------------------------------- 00:19:18.478 00:19:18.478 18:11:10 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:18.479 18:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.479 18:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:18.479 18:11:10 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:18.479 18:11:10 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:18.479 Remove shared memory files 00:19:18.479 18:11:10 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:18.479 18:11:10 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:18.479 18:11:10 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:18.479 18:11:10 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid61357 /dev/shm/spdk_tgt_trace.pid75983 00:19:18.479 18:11:10 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:18.479 18:11:10 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:18.479 00:19:18.479 real 1m13.727s 00:19:18.479 user 2m41.684s 00:19:18.479 sys 0m4.219s 00:19:18.479 18:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:18.479 ************************************ 00:19:18.479 END TEST ftl_fio_basic 00:19:18.479 ************************************ 00:19:18.479 18:11:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:18.738 18:11:11 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:18.738 18:11:11 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:18.738 18:11:11 ftl -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:19:18.738 18:11:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:18.738 ************************************ 00:19:18.738 START TEST ftl_bdevperf 00:19:18.738 ************************************ 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:18.738 * Looking for test storage... 00:19:18.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.738 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
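bdevperf.sh takes the data and cache controllers as positional arguments; the run_test line above passed 0000:00:11.0 and 0000:00:10.0, which surface as device and cache_device in the trace below. Spelled out as a sketch of the script head (the $1/$2 assignments are inferred from the trace, not quoted from the source):

    # test/ftl/bdevperf.sh <data bdf> <cache bdf>
    device=$1          # 0000:00:11.0, NVMe namespace backing the FTL base storage
    cache_device=$2    # 0000:00:10.0, NVMe namespace serving as the non-volatile cache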
00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=77979 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 77979 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 77979 ']' 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:18.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:18.739 18:11:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:18.739 [2024-05-15 18:11:11.227871] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
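bdevperf was started with -z, so after EAL initialization it opens its RPC socket and idles instead of running a workload; waitforlisten below just polls that socket, and the run is triggered later over the same socket. The lifecycle, reduced to a hedged sketch (rpc_get_methods is used here as a stand-in readiness probe, and the perform_tests trigger via examples/bdev/bdevperf/bdevperf.py is an assumption about how the run is eventually started, not quoted from this script):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # Poll the default RPC socket until the application answers.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; do
        sleep 0.1
    done
    # RPCs then attach nvme0/nvc0 and build the ftl0 bdev (see below), after
    # which the workload is started, e.g.:
    #   /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests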
00:19:18.739 [2024-05-15 18:11:11.228051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77979 ] 00:19:18.998 [2024-05-15 18:11:11.403657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.257 [2024-05-15 18:11:11.643636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.825 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:19.825 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:19:19.825 18:11:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:19.825 18:11:12 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:19.825 18:11:12 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:19.825 18:11:12 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:19.825 18:11:12 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:19.825 18:11:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:20.084 18:11:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:20.084 18:11:12 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:20.084 18:11:12 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:20.084 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:19:20.084 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:20.084 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:19:20.084 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:19:20.084 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:20.343 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:20.343 { 00:19:20.343 "name": "nvme0n1", 00:19:20.343 "aliases": [ 00:19:20.343 "96f5108a-2591-40a8-9507-52fed026b75e" 00:19:20.343 ], 00:19:20.343 "product_name": "NVMe disk", 00:19:20.343 "block_size": 4096, 00:19:20.343 "num_blocks": 1310720, 00:19:20.343 "uuid": "96f5108a-2591-40a8-9507-52fed026b75e", 00:19:20.343 "assigned_rate_limits": { 00:19:20.343 "rw_ios_per_sec": 0, 00:19:20.343 "rw_mbytes_per_sec": 0, 00:19:20.343 "r_mbytes_per_sec": 0, 00:19:20.343 "w_mbytes_per_sec": 0 00:19:20.343 }, 00:19:20.343 "claimed": true, 00:19:20.343 "claim_type": "read_many_write_one", 00:19:20.343 "zoned": false, 00:19:20.343 "supported_io_types": { 00:19:20.343 "read": true, 00:19:20.343 "write": true, 00:19:20.343 "unmap": true, 00:19:20.343 "write_zeroes": true, 00:19:20.343 "flush": true, 00:19:20.343 "reset": true, 00:19:20.343 "compare": true, 00:19:20.343 "compare_and_write": false, 00:19:20.343 "abort": true, 00:19:20.343 "nvme_admin": true, 00:19:20.343 "nvme_io": true 00:19:20.343 }, 00:19:20.343 "driver_specific": { 00:19:20.343 "nvme": [ 00:19:20.343 { 00:19:20.344 "pci_address": "0000:00:11.0", 00:19:20.344 "trid": { 00:19:20.344 "trtype": "PCIe", 00:19:20.344 "traddr": "0000:00:11.0" 00:19:20.344 }, 00:19:20.344 "ctrlr_data": { 00:19:20.344 "cntlid": 0, 00:19:20.344 "vendor_id": "0x1b36", 00:19:20.344 "model_number": "QEMU NVMe Ctrl", 00:19:20.344 "serial_number": "12341", 
00:19:20.344 "firmware_revision": "8.0.0", 00:19:20.344 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:20.344 "oacs": { 00:19:20.344 "security": 0, 00:19:20.344 "format": 1, 00:19:20.344 "firmware": 0, 00:19:20.344 "ns_manage": 1 00:19:20.344 }, 00:19:20.344 "multi_ctrlr": false, 00:19:20.344 "ana_reporting": false 00:19:20.344 }, 00:19:20.344 "vs": { 00:19:20.344 "nvme_version": "1.4" 00:19:20.344 }, 00:19:20.344 "ns_data": { 00:19:20.344 "id": 1, 00:19:20.344 "can_share": false 00:19:20.344 } 00:19:20.344 } 00:19:20.344 ], 00:19:20.344 "mp_policy": "active_passive" 00:19:20.344 } 00:19:20.344 } 00:19:20.344 ]' 00:19:20.344 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:19:20.601 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:19:20.601 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:20.601 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=1310720 00:19:20.601 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:19:20.601 18:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 5120 00:19:20.601 18:11:12 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:20.601 18:11:12 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:20.601 18:11:12 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:20.601 18:11:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:20.601 18:11:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:20.859 18:11:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=25517134-dba2-4abf-90eb-9fcdaaf439fc 00:19:20.859 18:11:13 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:20.859 18:11:13 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 25517134-dba2-4abf-90eb-9fcdaaf439fc 00:19:21.118 18:11:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:21.376 18:11:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=4ecb6476-c00c-4f54-924c-510d626969eb 00:19:21.376 18:11:13 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4ecb6476-c00c-4f54-924c-510d626969eb 00:19:21.636 18:11:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=447433ab-22c5-4a68-b1e3-1a51dac04fb9 00:19:21.636 18:11:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 447433ab-22c5-4a68-b1e3-1a51dac04fb9 00:19:21.636 18:11:13 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:21.636 18:11:13 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:21.636 18:11:13 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=447433ab-22c5-4a68-b1e3-1a51dac04fb9 00:19:21.636 18:11:13 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:21.636 18:11:13 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 447433ab-22c5-4a68-b1e3-1a51dac04fb9 00:19:21.636 18:11:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=447433ab-22c5-4a68-b1e3-1a51dac04fb9 00:19:21.636 18:11:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:21.636 18:11:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:19:21.636 18:11:13 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1377 -- # local nb 00:19:21.636 18:11:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 447433ab-22c5-4a68-b1e3-1a51dac04fb9 00:19:21.895 18:11:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:21.895 { 00:19:21.895 "name": "447433ab-22c5-4a68-b1e3-1a51dac04fb9", 00:19:21.895 "aliases": [ 00:19:21.895 "lvs/nvme0n1p0" 00:19:21.895 ], 00:19:21.895 "product_name": "Logical Volume", 00:19:21.895 "block_size": 4096, 00:19:21.895 "num_blocks": 26476544, 00:19:21.895 "uuid": "447433ab-22c5-4a68-b1e3-1a51dac04fb9", 00:19:21.895 "assigned_rate_limits": { 00:19:21.895 "rw_ios_per_sec": 0, 00:19:21.895 "rw_mbytes_per_sec": 0, 00:19:21.895 "r_mbytes_per_sec": 0, 00:19:21.895 "w_mbytes_per_sec": 0 00:19:21.895 }, 00:19:21.895 "claimed": false, 00:19:21.895 "zoned": false, 00:19:21.895 "supported_io_types": { 00:19:21.895 "read": true, 00:19:21.895 "write": true, 00:19:21.895 "unmap": true, 00:19:21.895 "write_zeroes": true, 00:19:21.895 "flush": false, 00:19:21.895 "reset": true, 00:19:21.895 "compare": false, 00:19:21.895 "compare_and_write": false, 00:19:21.895 "abort": false, 00:19:21.895 "nvme_admin": false, 00:19:21.895 "nvme_io": false 00:19:21.895 }, 00:19:21.895 "driver_specific": { 00:19:21.895 "lvol": { 00:19:21.895 "lvol_store_uuid": "4ecb6476-c00c-4f54-924c-510d626969eb", 00:19:21.895 "base_bdev": "nvme0n1", 00:19:21.895 "thin_provision": true, 00:19:21.895 "num_allocated_clusters": 0, 00:19:21.895 "snapshot": false, 00:19:21.895 "clone": false, 00:19:21.895 "esnap_clone": false 00:19:21.895 } 00:19:21.895 } 00:19:21.895 } 00:19:21.895 ]' 00:19:21.895 18:11:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:19:21.895 18:11:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:19:21.895 18:11:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:21.895 18:11:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:19:21.895 18:11:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:19:21.895 18:11:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:19:21.895 18:11:14 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:21.895 18:11:14 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:21.895 18:11:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:22.464 18:11:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:22.464 18:11:14 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:22.464 18:11:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 447433ab-22c5-4a68-b1e3-1a51dac04fb9 00:19:22.464 18:11:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=447433ab-22c5-4a68-b1e3-1a51dac04fb9 00:19:22.464 18:11:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:22.464 18:11:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:19:22.464 18:11:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:19:22.464 18:11:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 447433ab-22c5-4a68-b1e3-1a51dac04fb9 00:19:22.722 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:22.722 { 00:19:22.722 "name": 
"447433ab-22c5-4a68-b1e3-1a51dac04fb9", 00:19:22.722 "aliases": [ 00:19:22.722 "lvs/nvme0n1p0" 00:19:22.722 ], 00:19:22.722 "product_name": "Logical Volume", 00:19:22.722 "block_size": 4096, 00:19:22.722 "num_blocks": 26476544, 00:19:22.722 "uuid": "447433ab-22c5-4a68-b1e3-1a51dac04fb9", 00:19:22.722 "assigned_rate_limits": { 00:19:22.722 "rw_ios_per_sec": 0, 00:19:22.722 "rw_mbytes_per_sec": 0, 00:19:22.722 "r_mbytes_per_sec": 0, 00:19:22.722 "w_mbytes_per_sec": 0 00:19:22.722 }, 00:19:22.722 "claimed": false, 00:19:22.723 "zoned": false, 00:19:22.723 "supported_io_types": { 00:19:22.723 "read": true, 00:19:22.723 "write": true, 00:19:22.723 "unmap": true, 00:19:22.723 "write_zeroes": true, 00:19:22.723 "flush": false, 00:19:22.723 "reset": true, 00:19:22.723 "compare": false, 00:19:22.723 "compare_and_write": false, 00:19:22.723 "abort": false, 00:19:22.723 "nvme_admin": false, 00:19:22.723 "nvme_io": false 00:19:22.723 }, 00:19:22.723 "driver_specific": { 00:19:22.723 "lvol": { 00:19:22.723 "lvol_store_uuid": "4ecb6476-c00c-4f54-924c-510d626969eb", 00:19:22.723 "base_bdev": "nvme0n1", 00:19:22.723 "thin_provision": true, 00:19:22.723 "num_allocated_clusters": 0, 00:19:22.723 "snapshot": false, 00:19:22.723 "clone": false, 00:19:22.723 "esnap_clone": false 00:19:22.723 } 00:19:22.723 } 00:19:22.723 } 00:19:22.723 ]' 00:19:22.723 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:19:22.723 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:19:22.723 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:22.723 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:19:22.723 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:19:22.723 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:19:22.723 18:11:15 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:22.723 18:11:15 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:22.981 18:11:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:19:22.981 18:11:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 447433ab-22c5-4a68-b1e3-1a51dac04fb9 00:19:22.981 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=447433ab-22c5-4a68-b1e3-1a51dac04fb9 00:19:22.981 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:22.981 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:19:22.981 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:19:22.981 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 447433ab-22c5-4a68-b1e3-1a51dac04fb9 00:19:23.240 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:23.240 { 00:19:23.240 "name": "447433ab-22c5-4a68-b1e3-1a51dac04fb9", 00:19:23.240 "aliases": [ 00:19:23.240 "lvs/nvme0n1p0" 00:19:23.240 ], 00:19:23.240 "product_name": "Logical Volume", 00:19:23.240 "block_size": 4096, 00:19:23.240 "num_blocks": 26476544, 00:19:23.240 "uuid": "447433ab-22c5-4a68-b1e3-1a51dac04fb9", 00:19:23.240 "assigned_rate_limits": { 00:19:23.240 "rw_ios_per_sec": 0, 00:19:23.240 "rw_mbytes_per_sec": 0, 00:19:23.240 "r_mbytes_per_sec": 0, 00:19:23.240 "w_mbytes_per_sec": 0 00:19:23.240 }, 00:19:23.240 "claimed": false, 
00:19:23.240 "zoned": false, 00:19:23.240 "supported_io_types": { 00:19:23.240 "read": true, 00:19:23.240 "write": true, 00:19:23.240 "unmap": true, 00:19:23.240 "write_zeroes": true, 00:19:23.240 "flush": false, 00:19:23.240 "reset": true, 00:19:23.240 "compare": false, 00:19:23.240 "compare_and_write": false, 00:19:23.240 "abort": false, 00:19:23.240 "nvme_admin": false, 00:19:23.240 "nvme_io": false 00:19:23.240 }, 00:19:23.240 "driver_specific": { 00:19:23.240 "lvol": { 00:19:23.240 "lvol_store_uuid": "4ecb6476-c00c-4f54-924c-510d626969eb", 00:19:23.240 "base_bdev": "nvme0n1", 00:19:23.240 "thin_provision": true, 00:19:23.240 "num_allocated_clusters": 0, 00:19:23.240 "snapshot": false, 00:19:23.240 "clone": false, 00:19:23.240 "esnap_clone": false 00:19:23.240 } 00:19:23.240 } 00:19:23.240 } 00:19:23.240 ]' 00:19:23.240 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:19:23.499 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:19:23.499 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:23.499 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:19:23.499 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:19:23.499 18:11:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:19:23.499 18:11:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:19:23.499 18:11:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 447433ab-22c5-4a68-b1e3-1a51dac04fb9 -c nvc0n1p0 --l2p_dram_limit 20 00:19:23.759 [2024-05-15 18:11:16.084833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.759 [2024-05-15 18:11:16.084919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:23.759 [2024-05-15 18:11:16.084943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:23.759 [2024-05-15 18:11:16.084958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.759 [2024-05-15 18:11:16.085046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.759 [2024-05-15 18:11:16.085067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:23.759 [2024-05-15 18:11:16.085084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:23.759 [2024-05-15 18:11:16.085098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.759 [2024-05-15 18:11:16.085131] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:23.759 [2024-05-15 18:11:16.086216] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:23.759 [2024-05-15 18:11:16.086250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.759 [2024-05-15 18:11:16.086273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:23.759 [2024-05-15 18:11:16.086287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:19:23.759 [2024-05-15 18:11:16.086316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.759 [2024-05-15 18:11:16.086486] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6bd31b7d-4cc4-4bc0-9894-18ccf30cc650 00:19:23.759 [2024-05-15 18:11:16.088446] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.759 [2024-05-15 18:11:16.088501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:23.759 [2024-05-15 18:11:16.088522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:23.759 [2024-05-15 18:11:16.088534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.759 [2024-05-15 18:11:16.098371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.759 [2024-05-15 18:11:16.098414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:23.759 [2024-05-15 18:11:16.098450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.755 ms 00:19:23.760 [2024-05-15 18:11:16.098462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.760 [2024-05-15 18:11:16.098597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.760 [2024-05-15 18:11:16.098617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:23.760 [2024-05-15 18:11:16.098633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:23.760 [2024-05-15 18:11:16.098645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.760 [2024-05-15 18:11:16.098727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.760 [2024-05-15 18:11:16.098745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:23.760 [2024-05-15 18:11:16.098761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:23.760 [2024-05-15 18:11:16.098773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.760 [2024-05-15 18:11:16.098805] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:23.760 [2024-05-15 18:11:16.103950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.760 [2024-05-15 18:11:16.103995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:23.760 [2024-05-15 18:11:16.104012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.157 ms 00:19:23.760 [2024-05-15 18:11:16.104027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.760 [2024-05-15 18:11:16.104067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.760 [2024-05-15 18:11:16.104085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:23.760 [2024-05-15 18:11:16.104099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:23.760 [2024-05-15 18:11:16.104113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.760 [2024-05-15 18:11:16.104170] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:23.760 [2024-05-15 18:11:16.104328] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:23.760 [2024-05-15 18:11:16.104386] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:23.760 [2024-05-15 18:11:16.104409] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:23.760 [2024-05-15 18:11:16.104424] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:23.760 [2024-05-15 
18:11:16.104440] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:23.760 [2024-05-15 18:11:16.104455] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:23.760 [2024-05-15 18:11:16.104468] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:23.760 [2024-05-15 18:11:16.104479] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:23.760 [2024-05-15 18:11:16.104491] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:23.760 [2024-05-15 18:11:16.104504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.760 [2024-05-15 18:11:16.104518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:23.760 [2024-05-15 18:11:16.104547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:19:23.760 [2024-05-15 18:11:16.104562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.760 [2024-05-15 18:11:16.104633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.760 [2024-05-15 18:11:16.104651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:23.760 [2024-05-15 18:11:16.104665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:23.760 [2024-05-15 18:11:16.104679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.760 [2024-05-15 18:11:16.104768] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:23.760 [2024-05-15 18:11:16.104788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:23.760 [2024-05-15 18:11:16.104801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.760 [2024-05-15 18:11:16.104815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.760 [2024-05-15 18:11:16.104827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:23.760 [2024-05-15 18:11:16.104839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:23.760 [2024-05-15 18:11:16.104850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:23.760 [2024-05-15 18:11:16.104862] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:23.760 [2024-05-15 18:11:16.104873] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:23.760 [2024-05-15 18:11:16.104886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.760 [2024-05-15 18:11:16.104896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:23.760 [2024-05-15 18:11:16.104925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:23.760 [2024-05-15 18:11:16.104935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.760 [2024-05-15 18:11:16.104960] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:23.760 [2024-05-15 18:11:16.104971] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:23.760 [2024-05-15 18:11:16.104983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.760 [2024-05-15 18:11:16.104993] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:23.760 [2024-05-15 18:11:16.105011] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:23.760 [2024-05-15 18:11:16.105023] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.760 [2024-05-15 18:11:16.105036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:23.760 [2024-05-15 18:11:16.105047] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:23.760 [2024-05-15 18:11:16.105060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:23.760 [2024-05-15 18:11:16.105071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:23.760 [2024-05-15 18:11:16.105083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:23.760 [2024-05-15 18:11:16.105093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:23.760 [2024-05-15 18:11:16.105105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:23.760 [2024-05-15 18:11:16.105116] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:23.760 [2024-05-15 18:11:16.105128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:23.760 [2024-05-15 18:11:16.105138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:23.760 [2024-05-15 18:11:16.105280] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:23.760 [2024-05-15 18:11:16.105293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:23.760 [2024-05-15 18:11:16.105305] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:23.760 [2024-05-15 18:11:16.105316] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:23.760 [2024-05-15 18:11:16.105330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:23.760 [2024-05-15 18:11:16.105341] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:23.760 [2024-05-15 18:11:16.105372] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:23.760 [2024-05-15 18:11:16.105402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.760 [2024-05-15 18:11:16.105415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:23.760 [2024-05-15 18:11:16.105426] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:23.760 [2024-05-15 18:11:16.105438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.760 [2024-05-15 18:11:16.105449] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:23.760 [2024-05-15 18:11:16.105463] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:23.760 [2024-05-15 18:11:16.105490] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.760 [2024-05-15 18:11:16.105506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.760 [2024-05-15 18:11:16.105521] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:23.760 [2024-05-15 18:11:16.105535] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:23.760 [2024-05-15 18:11:16.105546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:23.760 [2024-05-15 18:11:16.105560] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:23.760 [2024-05-15 18:11:16.105576] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:23.760 [2024-05-15 18:11:16.105593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 
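A quick cross-check of the layout dump above: the FTL device exposes 20971520 L2P entries, i.e. 20971520 4-KiB blocks = 80 GiB of logical space, and at the reported L2P address size of 4 bytes the full mapping table is 20971520 * 4 B = 80 MiB, exactly the size of the l2p region in the NV cache layout. Since bdev_ftl_create was invoked with --l2p_dram_limit 20, at most ~20 MiB of that table may stay resident in DRAM (the trace further down reports 'l2p maximum resident size is: 19 (of 20) MiB'), so the remainder has to be paged in on demand. The arithmetic as a shell one-liner:

  echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80 (MiB of L2P for 80 GiB of 4 KiB blocks)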
00:19:23.760 [2024-05-15 18:11:16.105607] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:23.760 [2024-05-15 18:11:16.105625] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:23.760 [2024-05-15 18:11:16.105638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:23.760 [2024-05-15 18:11:16.105663] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:23.760 [2024-05-15 18:11:16.105675] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:23.761 [2024-05-15 18:11:16.105689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:23.761 [2024-05-15 18:11:16.105700] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:23.761 [2024-05-15 18:11:16.105714] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:23.761 [2024-05-15 18:11:16.105727] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:23.761 [2024-05-15 18:11:16.105741] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:23.761 [2024-05-15 18:11:16.105753] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:23.761 [2024-05-15 18:11:16.105767] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:23.761 [2024-05-15 18:11:16.105795] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:23.761 [2024-05-15 18:11:16.105810] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:23.761 [2024-05-15 18:11:16.105822] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:23.761 [2024-05-15 18:11:16.105838] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:23.761 [2024-05-15 18:11:16.105852] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:23.761 [2024-05-15 18:11:16.105869] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:23.761 [2024-05-15 18:11:16.105882] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:23.761 [2024-05-15 18:11:16.105896] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:23.761 [2024-05-15 18:11:16.105908] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:23.761 [2024-05-15 18:11:16.105924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.761 [2024-05-15 18:11:16.105936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:23.761 [2024-05-15 18:11:16.105951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:19:23.761 [2024-05-15 18:11:16.105963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.761 [2024-05-15 18:11:16.127657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.761 [2024-05-15 18:11:16.127705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:23.761 [2024-05-15 18:11:16.127743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.642 ms 00:19:23.761 [2024-05-15 18:11:16.127755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.761 [2024-05-15 18:11:16.127898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.761 [2024-05-15 18:11:16.127916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:23.761 [2024-05-15 18:11:16.127936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:23.761 [2024-05-15 18:11:16.127948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.761 [2024-05-15 18:11:16.179636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.761 [2024-05-15 18:11:16.179702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:23.761 [2024-05-15 18:11:16.179743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.615 ms 00:19:23.761 [2024-05-15 18:11:16.179757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.761 [2024-05-15 18:11:16.179848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.761 [2024-05-15 18:11:16.179872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:23.761 [2024-05-15 18:11:16.179897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:23.761 [2024-05-15 18:11:16.179910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.761 [2024-05-15 18:11:16.180606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.761 [2024-05-15 18:11:16.180635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:23.761 [2024-05-15 18:11:16.180653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:19:23.761 [2024-05-15 18:11:16.180666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.761 [2024-05-15 18:11:16.180820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.761 [2024-05-15 18:11:16.180839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:23.761 [2024-05-15 18:11:16.180857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:19:23.761 [2024-05-15 18:11:16.180869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.761 [2024-05-15 18:11:16.200457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.761 [2024-05-15 18:11:16.200525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:23.761 [2024-05-15 18:11:16.200565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 19.559 ms 00:19:23.761 [2024-05-15 18:11:16.200578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.761 [2024-05-15 18:11:16.216031] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:23.761 [2024-05-15 18:11:16.223818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.761 [2024-05-15 18:11:16.223881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:23.761 [2024-05-15 18:11:16.223902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.103 ms 00:19:23.761 [2024-05-15 18:11:16.223918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.019 [2024-05-15 18:11:16.297874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.019 [2024-05-15 18:11:16.297973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:24.019 [2024-05-15 18:11:16.297994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.900 ms 00:19:24.019 [2024-05-15 18:11:16.298009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.019 [2024-05-15 18:11:16.298085] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:19:24.019 [2024-05-15 18:11:16.298113] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:19:26.552 [2024-05-15 18:11:18.920004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.552 [2024-05-15 18:11:18.920089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:26.552 [2024-05-15 18:11:18.920116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2621.916 ms 00:19:26.552 [2024-05-15 18:11:18.920145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.552 [2024-05-15 18:11:18.920426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.552 [2024-05-15 18:11:18.920453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:26.552 [2024-05-15 18:11:18.920468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:19:26.552 [2024-05-15 18:11:18.920483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.552 [2024-05-15 18:11:18.950595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.552 [2024-05-15 18:11:18.950651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:26.552 [2024-05-15 18:11:18.950687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.045 ms 00:19:26.552 [2024-05-15 18:11:18.950703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.552 [2024-05-15 18:11:18.980102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.552 [2024-05-15 18:11:18.980151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:26.552 [2024-05-15 18:11:18.980171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.340 ms 00:19:26.552 [2024-05-15 18:11:18.980189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.552 [2024-05-15 18:11:18.980650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.552 [2024-05-15 18:11:18.980688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 
00:19:26.552 [2024-05-15 18:11:18.980705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:19:26.552 [2024-05-15 18:11:18.980719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.811 [2024-05-15 18:11:19.059992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.811 [2024-05-15 18:11:19.060070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:26.811 [2024-05-15 18:11:19.060096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.211 ms 00:19:26.811 [2024-05-15 18:11:19.060111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.811 [2024-05-15 18:11:19.091513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.811 [2024-05-15 18:11:19.091590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:26.811 [2024-05-15 18:11:19.091611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.337 ms 00:19:26.811 [2024-05-15 18:11:19.091626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.811 [2024-05-15 18:11:19.094017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.811 [2024-05-15 18:11:19.094063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:26.811 [2024-05-15 18:11:19.094094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.327 ms 00:19:26.811 [2024-05-15 18:11:19.094111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.811 [2024-05-15 18:11:19.125602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.812 [2024-05-15 18:11:19.125696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:26.812 [2024-05-15 18:11:19.125732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.422 ms 00:19:26.812 [2024-05-15 18:11:19.125747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.812 [2024-05-15 18:11:19.125812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.812 [2024-05-15 18:11:19.125835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:26.812 [2024-05-15 18:11:19.125849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:26.812 [2024-05-15 18:11:19.125864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.812 [2024-05-15 18:11:19.125991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.812 [2024-05-15 18:11:19.126013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:26.812 [2024-05-15 18:11:19.126026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:26.812 [2024-05-15 18:11:19.126040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.812 [2024-05-15 18:11:19.127598] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3042.170 ms, result 0 00:19:26.812 { 00:19:26.812 "name": "ftl0", 00:19:26.812 "uuid": "6bd31b7d-4cc4-4bc0-9894-18ccf30cc650" 00:19:26.812 } 00:19:26.812 18:11:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:26.812 18:11:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:19:26.812 18:11:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:19:27.071 18:11:19 ftl.ftl_bdevperf -- 
ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:27.071 [2024-05-15 18:11:19.567733] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:27.330 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:27.330 Zero copy mechanism will not be used. 00:19:27.330 Running I/O for 4 seconds... 00:19:31.536 00:19:31.536 Latency(us) 00:19:31.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.536 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:31.536 ftl0 : 4.00 1727.23 114.70 0.00 0.00 605.22 253.21 3261.91 00:19:31.536 =================================================================================================================== 00:19:31.536 Total : 1727.23 114.70 0.00 0.00 605.22 253.21 3261.91 00:19:31.536 [2024-05-15 18:11:23.579275] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:31.536 0 00:19:31.536 18:11:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:31.536 [2024-05-15 18:11:23.714436] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:31.536 Running I/O for 4 seconds... 00:19:35.742 00:19:35.742 Latency(us) 00:19:35.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.742 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:35.742 ftl0 : 4.02 7667.16 29.95 0.00 0.00 16651.03 329.54 33840.41 00:19:35.742 =================================================================================================================== 00:19:35.742 Total : 7667.16 29.95 0.00 0.00 16651.03 0.00 33840.41 00:19:35.742 [2024-05-15 18:11:27.743363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:35.742 0 00:19:35.742 18:11:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:35.742 [2024-05-15 18:11:27.894042] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:35.742 Running I/O for 4 seconds... 
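Note on the test passes: all three runs reuse the single bdevperf instance started earlier with -z and are kicked off over RPC by bdevperf.py. The first pass used a 69632-byte I/O size (68 KiB = 17 x 4 KiB blocks), which is why bdevperf warned that it exceeds the 65536-byte zero-copy threshold and disabled zero copy for that run. Replayed as plain commands, with the paths taken from this log:

  BP=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  $BP perform_tests -q 1 -w randwrite -t 4 -o 69632    # QD=1, 68 KiB random writes
  $BP perform_tests -q 128 -w randwrite -t 4 -o 4096   # QD=128, 4 KiB random writes
  $BP perform_tests -q 128 -w verify -t 4 -o 4096      # QD=128, write then read back and compare

The verify table that follows covers LBA 0x0 for length 0x1400000 blocks, i.e. 20971520 blocks, the full 80 GiB logical space sized above.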
00:19:39.930 00:19:39.930 Latency(us) 00:19:39.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.930 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:39.930 Verification LBA range: start 0x0 length 0x1400000 00:19:39.930 ftl0 : 4.01 5960.99 23.29 0.00 0.00 21398.44 374.23 31457.28 00:19:39.930 =================================================================================================================== 00:19:39.930 Total : 5960.99 23.29 0.00 0.00 21398.44 0.00 31457.28 00:19:39.930 [2024-05-15 18:11:31.925399] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:39.930 0 00:19:39.930 18:11:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 [2024-05-15 18:11:32.198919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-05-15 18:11:32.199222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel [2024-05-15 18:11:32.199402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms [2024-05-15 18:11:32.199469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-05-15 18:11:32.199647] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread [2024-05-15 18:11:32.203332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-05-15 18:11:32.203474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device [2024-05-15 18:11:32.203598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.596 ms [2024-05-15 18:11:32.203648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-05-15 18:11:32.205241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-05-15 18:11:32.205471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller [2024-05-15 18:11:32.205596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.524 ms [2024-05-15 18:11:32.205647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-05-15 18:11:32.387192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-05-15 18:11:32.387468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P [2024-05-15 18:11:32.387617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 181.425 ms [2024-05-15 18:11:32.387670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-05-15 18:11:32.394387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-05-15 18:11:32.394564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps [2024-05-15 18:11:32.394675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.533 ms [2024-05-15 18:11:32.394723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-05-15 18:11:32.426467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-05-15 18:11:32.426639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata [2024-05-15 18:11:32.426760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 31.590 ms 00:19:39.930 [2024-05-15 18:11:32.426902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.191 [2024-05-15 18:11:32.445673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.191 [2024-05-15 18:11:32.445715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:40.191 [2024-05-15 18:11:32.445751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.704 ms 00:19:40.191 [2024-05-15 18:11:32.445767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.191 [2024-05-15 18:11:32.445971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.191 [2024-05-15 18:11:32.445998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:40.191 [2024-05-15 18:11:32.446016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:19:40.191 [2024-05-15 18:11:32.446028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.191 [2024-05-15 18:11:32.475759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.191 [2024-05-15 18:11:32.475830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:40.191 [2024-05-15 18:11:32.475852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.701 ms 00:19:40.191 [2024-05-15 18:11:32.475865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.191 [2024-05-15 18:11:32.505882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.191 [2024-05-15 18:11:32.505950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:40.191 [2024-05-15 18:11:32.505986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.966 ms 00:19:40.191 [2024-05-15 18:11:32.505998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.191 [2024-05-15 18:11:32.534960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.191 [2024-05-15 18:11:32.535014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:40.191 [2024-05-15 18:11:32.535049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.896 ms 00:19:40.191 [2024-05-15 18:11:32.535060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.191 [2024-05-15 18:11:32.564394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.191 [2024-05-15 18:11:32.564434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:40.191 [2024-05-15 18:11:32.564468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.230 ms 00:19:40.191 [2024-05-15 18:11:32.564479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.191 [2024-05-15 18:11:32.564525] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:40.191 [2024-05-15 18:11:32.564548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:19:40.191 [2024-05-15 18:11:32.564603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.564998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.565013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.565025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.565040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.565053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.565067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.565080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.565105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.565117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.565134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.565147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:40.191 [2024-05-15 18:11:32.565161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565709] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.565989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.566006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:40.192 [2024-05-15 18:11:32.566027] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:40.192 [2024-05-15 18:11:32.566042] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6bd31b7d-4cc4-4bc0-9894-18ccf30cc650 00:19:40.192 [2024-05-15 18:11:32.566055] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:40.192 [2024-05-15 18:11:32.566071] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:19:40.192 [2024-05-15 18:11:32.566083] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:40.192 [2024-05-15 18:11:32.566101] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:40.192 [2024-05-15 18:11:32.566113] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:40.192 [2024-05-15 18:11:32.566127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:40.192 [2024-05-15 18:11:32.566139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:40.192 [2024-05-15 18:11:32.566152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:40.192 [2024-05-15 18:11:32.566163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:40.192 [2024-05-15 18:11:32.566177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.192 [2024-05-15 18:11:32.566189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:40.192 [2024-05-15 18:11:32.566205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.655 ms 00:19:40.192 [2024-05-15 18:11:32.566216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.192 [2024-05-15 18:11:32.582658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.192 [2024-05-15 18:11:32.582713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:40.192 [2024-05-15 18:11:32.582733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.380 ms 00:19:40.192 [2024-05-15 18:11:32.582745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.192 [2024-05-15 18:11:32.583014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.192 [2024-05-15 18:11:32.583039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:40.192 [2024-05-15 18:11:32.583056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:19:40.192 [2024-05-15 18:11:32.583068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.192 [2024-05-15 18:11:32.631210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.192 [2024-05-15 18:11:32.631288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.192 [2024-05-15 18:11:32.631339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.192 [2024-05-15 18:11:32.631352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.192 [2024-05-15 18:11:32.631432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.192 [2024-05-15 18:11:32.631447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.192 [2024-05-15 18:11:32.631461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.192 [2024-05-15 18:11:32.631472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.192 [2024-05-15 18:11:32.631613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.192 [2024-05-15 18:11:32.631632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.192 [2024-05-15 18:11:32.631652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.192 [2024-05-15 18:11:32.631664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.192 [2024-05-15 18:11:32.631694] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.192 [2024-05-15 18:11:32.631709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.192 [2024-05-15 18:11:32.631723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.192 [2024-05-15 18:11:32.631735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.451 [2024-05-15 18:11:32.729440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.451 [2024-05-15 18:11:32.729526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.451 [2024-05-15 18:11:32.729549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.451 [2024-05-15 18:11:32.729563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.451 [2024-05-15 18:11:32.767771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.451 [2024-05-15 18:11:32.767867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.451 [2024-05-15 18:11:32.767899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.451 [2024-05-15 18:11:32.767912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.451 [2024-05-15 18:11:32.768013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.451 [2024-05-15 18:11:32.768032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.451 [2024-05-15 18:11:32.768048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.451 [2024-05-15 18:11:32.768063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.451 [2024-05-15 18:11:32.768130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.451 [2024-05-15 18:11:32.768147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.451 [2024-05-15 18:11:32.768162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.451 [2024-05-15 18:11:32.768174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.451 [2024-05-15 18:11:32.768325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.451 [2024-05-15 18:11:32.768346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.451 [2024-05-15 18:11:32.768362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.451 [2024-05-15 18:11:32.768374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.451 [2024-05-15 18:11:32.768443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.452 [2024-05-15 18:11:32.768467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:40.452 [2024-05-15 18:11:32.768483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.452 [2024-05-15 18:11:32.768496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.452 [2024-05-15 18:11:32.768546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.452 [2024-05-15 18:11:32.768562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.452 [2024-05-15 18:11:32.768576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.452 [2024-05-15 18:11:32.768588] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:40.452 [2024-05-15 18:11:32.768653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.452 [2024-05-15 18:11:32.768674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.452 [2024-05-15 18:11:32.768690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.452 [2024-05-15 18:11:32.768702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.452 [2024-05-15 18:11:32.768864] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 569.892 ms, result 0 00:19:40.452 true 00:19:40.452 18:11:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 77979 00:19:40.452 18:11:32 ftl.ftl_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 77979 ']' 00:19:40.452 18:11:32 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # kill -0 77979 00:19:40.452 18:11:32 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # uname 00:19:40.452 18:11:32 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:40.452 18:11:32 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77979 00:19:40.452 killing process with pid 77979 00:19:40.452 Received shutdown signal, test time was about 4.000000 seconds 00:19:40.452 00:19:40.452 Latency(us) 00:19:40.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.452 =================================================================================================================== 00:19:40.452 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.452 18:11:32 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:40.452 18:11:32 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:40.452 18:11:32 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77979' 00:19:40.452 18:11:32 ftl.ftl_bdevperf -- common/autotest_common.sh@965 -- # kill 77979 00:19:40.452 18:11:32 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # wait 77979 00:19:44.675 18:11:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:19:44.675 18:11:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:19:44.675 18:11:36 ftl.ftl_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:44.675 18:11:36 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:44.675 18:11:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:19:44.675 Remove shared memory files 00:19:44.675 18:11:36 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:44.675 18:11:36 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:44.675 18:11:36 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:44.675 18:11:36 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:44.675 18:11:36 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:44.675 18:11:36 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:44.675 00:19:44.675 real 0m25.885s 00:19:44.675 user 0m29.640s 00:19:44.675 sys 0m1.247s 00:19:44.675 18:11:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:44.675 18:11:36 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:44.675 ************************************ 00:19:44.675 END TEST ftl_bdevperf 00:19:44.675 
************************************ 00:19:44.675 18:11:36 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:44.675 18:11:36 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:44.675 18:11:36 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:44.675 18:11:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:44.675 ************************************ 00:19:44.675 START TEST ftl_trim 00:19:44.675 ************************************ 00:19:44.675 18:11:36 ftl.ftl_trim -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:44.675 * Looking for test storage... 00:19:44.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 
00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78344 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:44.675 18:11:37 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78344 00:19:44.675 18:11:37 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 78344 ']' 00:19:44.675 18:11:37 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.675 18:11:37 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:44.675 18:11:37 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.675 18:11:37 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:44.675 18:11:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:44.934 [2024-05-15 18:11:37.194880] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
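(Context note: at this point the harness has launched spdk_tgt -m 0x7 in the background as pid 78344 and is blocked in waitforlisten until the target answers on /var/tmp/spdk.sock. As a rough mental model only — the real helper in autotest_common.sh is more elaborate, with retries, timeouts and configurable socket paths — waitforlisten behaves like this hypothetical sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=78344
    # Poll until the target either dies or starts answering RPCs on its socket.
    while kill -0 "$pid" 2>/dev/null; do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

The DPDK/EAL initialization output that follows is what the target prints while this wait is in progress.)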
00:19:44.934 [2024-05-15 18:11:37.195083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78344 ] 00:19:44.934 [2024-05-15 18:11:37.367986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:45.193 [2024-05-15 18:11:37.602081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.193 [2024-05-15 18:11:37.602197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.193 [2024-05-15 18:11:37.602228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.128 18:11:38 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:46.128 18:11:38 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:19:46.128 18:11:38 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:46.128 18:11:38 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:46.128 18:11:38 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:46.128 18:11:38 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:46.128 18:11:38 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:46.128 18:11:38 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:46.412 18:11:38 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:46.412 18:11:38 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:46.412 18:11:38 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:46.412 18:11:38 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:19:46.412 18:11:38 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:46.412 18:11:38 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:19:46.412 18:11:38 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:19:46.412 18:11:38 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:46.670 18:11:38 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:46.670 { 00:19:46.670 "name": "nvme0n1", 00:19:46.670 "aliases": [ 00:19:46.670 "9364f959-cbfe-4156-a907-82d05f218c5a" 00:19:46.670 ], 00:19:46.670 "product_name": "NVMe disk", 00:19:46.670 "block_size": 4096, 00:19:46.670 "num_blocks": 1310720, 00:19:46.670 "uuid": "9364f959-cbfe-4156-a907-82d05f218c5a", 00:19:46.670 "assigned_rate_limits": { 00:19:46.670 "rw_ios_per_sec": 0, 00:19:46.670 "rw_mbytes_per_sec": 0, 00:19:46.670 "r_mbytes_per_sec": 0, 00:19:46.670 "w_mbytes_per_sec": 0 00:19:46.670 }, 00:19:46.670 "claimed": true, 00:19:46.670 "claim_type": "read_many_write_one", 00:19:46.670 "zoned": false, 00:19:46.670 "supported_io_types": { 00:19:46.670 "read": true, 00:19:46.670 "write": true, 00:19:46.670 "unmap": true, 00:19:46.670 "write_zeroes": true, 00:19:46.670 "flush": true, 00:19:46.670 "reset": true, 00:19:46.670 "compare": true, 00:19:46.670 "compare_and_write": false, 00:19:46.670 "abort": true, 00:19:46.670 "nvme_admin": true, 00:19:46.670 "nvme_io": true 00:19:46.670 }, 00:19:46.670 "driver_specific": { 00:19:46.670 "nvme": [ 00:19:46.670 { 00:19:46.670 "pci_address": "0000:00:11.0", 00:19:46.670 "trid": { 00:19:46.670 "trtype": "PCIe", 00:19:46.670 "traddr": "0000:00:11.0" 00:19:46.670 }, 00:19:46.670 "ctrlr_data": { 
00:19:46.670 "cntlid": 0, 00:19:46.670 "vendor_id": "0x1b36", 00:19:46.670 "model_number": "QEMU NVMe Ctrl", 00:19:46.670 "serial_number": "12341", 00:19:46.670 "firmware_revision": "8.0.0", 00:19:46.670 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:46.670 "oacs": { 00:19:46.670 "security": 0, 00:19:46.670 "format": 1, 00:19:46.670 "firmware": 0, 00:19:46.670 "ns_manage": 1 00:19:46.670 }, 00:19:46.670 "multi_ctrlr": false, 00:19:46.670 "ana_reporting": false 00:19:46.670 }, 00:19:46.670 "vs": { 00:19:46.670 "nvme_version": "1.4" 00:19:46.670 }, 00:19:46.670 "ns_data": { 00:19:46.670 "id": 1, 00:19:46.670 "can_share": false 00:19:46.670 } 00:19:46.670 } 00:19:46.670 ], 00:19:46.670 "mp_policy": "active_passive" 00:19:46.670 } 00:19:46.670 } 00:19:46.670 ]' 00:19:46.670 18:11:38 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:19:46.670 18:11:39 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:19:46.670 18:11:39 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:46.670 18:11:39 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=1310720 00:19:46.670 18:11:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:19:46.670 18:11:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 5120 00:19:46.670 18:11:39 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:46.670 18:11:39 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:46.670 18:11:39 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:46.670 18:11:39 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:46.670 18:11:39 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:46.928 18:11:39 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=4ecb6476-c00c-4f54-924c-510d626969eb 00:19:46.928 18:11:39 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:46.928 18:11:39 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ecb6476-c00c-4f54-924c-510d626969eb 00:19:47.188 18:11:39 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:47.446 18:11:39 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=608eade5-b99f-42d4-92d3-fafe41d7da1f 00:19:47.446 18:11:39 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 608eade5-b99f-42d4-92d3-fafe41d7da1f 00:19:47.704 18:11:40 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 00:19:47.704 18:11:40 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 00:19:47.704 18:11:40 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:47.704 18:11:40 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:47.704 18:11:40 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 00:19:47.704 18:11:40 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:47.704 18:11:40 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 00:19:47.704 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 00:19:47.704 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:47.704 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:19:47.704 18:11:40 ftl.ftl_trim -- 
common/autotest_common.sh@1377 -- # local nb 00:19:47.704 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 00:19:47.962 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:47.962 { 00:19:47.962 "name": "780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0", 00:19:47.962 "aliases": [ 00:19:47.962 "lvs/nvme0n1p0" 00:19:47.962 ], 00:19:47.962 "product_name": "Logical Volume", 00:19:47.962 "block_size": 4096, 00:19:47.962 "num_blocks": 26476544, 00:19:47.962 "uuid": "780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0", 00:19:47.962 "assigned_rate_limits": { 00:19:47.962 "rw_ios_per_sec": 0, 00:19:47.962 "rw_mbytes_per_sec": 0, 00:19:47.962 "r_mbytes_per_sec": 0, 00:19:47.962 "w_mbytes_per_sec": 0 00:19:47.962 }, 00:19:47.962 "claimed": false, 00:19:47.962 "zoned": false, 00:19:47.962 "supported_io_types": { 00:19:47.962 "read": true, 00:19:47.962 "write": true, 00:19:47.962 "unmap": true, 00:19:47.962 "write_zeroes": true, 00:19:47.962 "flush": false, 00:19:47.962 "reset": true, 00:19:47.962 "compare": false, 00:19:47.962 "compare_and_write": false, 00:19:47.962 "abort": false, 00:19:47.962 "nvme_admin": false, 00:19:47.962 "nvme_io": false 00:19:47.962 }, 00:19:47.962 "driver_specific": { 00:19:47.962 "lvol": { 00:19:47.962 "lvol_store_uuid": "608eade5-b99f-42d4-92d3-fafe41d7da1f", 00:19:47.962 "base_bdev": "nvme0n1", 00:19:47.962 "thin_provision": true, 00:19:47.962 "num_allocated_clusters": 0, 00:19:47.962 "snapshot": false, 00:19:47.962 "clone": false, 00:19:47.962 "esnap_clone": false 00:19:47.962 } 00:19:47.962 } 00:19:47.962 } 00:19:47.962 ]' 00:19:47.962 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:19:47.962 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:19:47.962 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:48.220 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:19:48.220 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:19:48.220 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:19:48.220 18:11:40 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:48.220 18:11:40 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:48.220 18:11:40 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:48.478 18:11:40 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:48.478 18:11:40 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:48.478 18:11:40 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 00:19:48.478 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 00:19:48.478 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:48.478 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:19:48.478 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:19:48.478 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 00:19:48.736 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:48.736 { 00:19:48.736 "name": "780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0", 00:19:48.736 "aliases": [ 00:19:48.736 
"lvs/nvme0n1p0" 00:19:48.736 ], 00:19:48.736 "product_name": "Logical Volume", 00:19:48.736 "block_size": 4096, 00:19:48.736 "num_blocks": 26476544, 00:19:48.736 "uuid": "780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0", 00:19:48.736 "assigned_rate_limits": { 00:19:48.736 "rw_ios_per_sec": 0, 00:19:48.736 "rw_mbytes_per_sec": 0, 00:19:48.736 "r_mbytes_per_sec": 0, 00:19:48.736 "w_mbytes_per_sec": 0 00:19:48.736 }, 00:19:48.736 "claimed": false, 00:19:48.736 "zoned": false, 00:19:48.736 "supported_io_types": { 00:19:48.736 "read": true, 00:19:48.736 "write": true, 00:19:48.736 "unmap": true, 00:19:48.736 "write_zeroes": true, 00:19:48.736 "flush": false, 00:19:48.736 "reset": true, 00:19:48.736 "compare": false, 00:19:48.736 "compare_and_write": false, 00:19:48.736 "abort": false, 00:19:48.736 "nvme_admin": false, 00:19:48.736 "nvme_io": false 00:19:48.736 }, 00:19:48.736 "driver_specific": { 00:19:48.736 "lvol": { 00:19:48.737 "lvol_store_uuid": "608eade5-b99f-42d4-92d3-fafe41d7da1f", 00:19:48.737 "base_bdev": "nvme0n1", 00:19:48.737 "thin_provision": true, 00:19:48.737 "num_allocated_clusters": 0, 00:19:48.737 "snapshot": false, 00:19:48.737 "clone": false, 00:19:48.737 "esnap_clone": false 00:19:48.737 } 00:19:48.737 } 00:19:48.737 } 00:19:48.737 ]' 00:19:48.737 18:11:40 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:19:48.737 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:19:48.737 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:48.737 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:19:48.737 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:19:48.737 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:19:48.737 18:11:41 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:48.737 18:11:41 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:48.995 18:11:41 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:48.995 18:11:41 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:48.995 18:11:41 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 00:19:48.995 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 00:19:48.995 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:48.995 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:19:48.995 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:19:48.995 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 00:19:49.254 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:49.254 { 00:19:49.254 "name": "780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0", 00:19:49.254 "aliases": [ 00:19:49.254 "lvs/nvme0n1p0" 00:19:49.254 ], 00:19:49.254 "product_name": "Logical Volume", 00:19:49.254 "block_size": 4096, 00:19:49.254 "num_blocks": 26476544, 00:19:49.254 "uuid": "780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0", 00:19:49.254 "assigned_rate_limits": { 00:19:49.254 "rw_ios_per_sec": 0, 00:19:49.254 "rw_mbytes_per_sec": 0, 00:19:49.254 "r_mbytes_per_sec": 0, 00:19:49.254 "w_mbytes_per_sec": 0 00:19:49.254 }, 00:19:49.254 "claimed": false, 00:19:49.254 "zoned": false, 00:19:49.254 "supported_io_types": { 00:19:49.254 "read": 
true, 00:19:49.254 "write": true, 00:19:49.254 "unmap": true, 00:19:49.254 "write_zeroes": true, 00:19:49.254 "flush": false, 00:19:49.254 "reset": true, 00:19:49.254 "compare": false, 00:19:49.254 "compare_and_write": false, 00:19:49.254 "abort": false, 00:19:49.254 "nvme_admin": false, 00:19:49.254 "nvme_io": false 00:19:49.254 }, 00:19:49.254 "driver_specific": { 00:19:49.254 "lvol": { 00:19:49.254 "lvol_store_uuid": "608eade5-b99f-42d4-92d3-fafe41d7da1f", 00:19:49.254 "base_bdev": "nvme0n1", 00:19:49.254 "thin_provision": true, 00:19:49.254 "num_allocated_clusters": 0, 00:19:49.254 "snapshot": false, 00:19:49.254 "clone": false, 00:19:49.254 "esnap_clone": false 00:19:49.254 } 00:19:49.254 } 00:19:49.254 } 00:19:49.254 ]' 00:19:49.254 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:19:49.254 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:19:49.254 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:49.254 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:19:49.254 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:19:49.254 18:11:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:19:49.254 18:11:41 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:49.254 18:11:41 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:49.513 [2024-05-15 18:11:41.908014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.513 [2024-05-15 18:11:41.908089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:49.513 [2024-05-15 18:11:41.908115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:49.513 [2024-05-15 18:11:41.908130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.513 [2024-05-15 18:11:41.911769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.513 [2024-05-15 18:11:41.911832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:49.513 [2024-05-15 18:11:41.911857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms 00:19:49.513 [2024-05-15 18:11:41.911870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.513 [2024-05-15 18:11:41.912010] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:49.513 [2024-05-15 18:11:41.912993] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:49.513 [2024-05-15 18:11:41.913039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.513 [2024-05-15 18:11:41.913056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:49.513 [2024-05-15 18:11:41.913074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:19:49.513 [2024-05-15 18:11:41.913086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.513 [2024-05-15 18:11:41.913374] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 08a666aa-140a-4706-af52-1f2e14a3178c 00:19:49.513 [2024-05-15 18:11:41.915119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.513 [2024-05-15 18:11:41.915166] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:49.513 [2024-05-15 18:11:41.915183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:49.513 [2024-05-15 18:11:41.915198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.513 [2024-05-15 18:11:41.924894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.513 [2024-05-15 18:11:41.924958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.513 [2024-05-15 18:11:41.924978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.576 ms 00:19:49.513 [2024-05-15 18:11:41.924992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.513 [2024-05-15 18:11:41.925224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.513 [2024-05-15 18:11:41.925253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.513 [2024-05-15 18:11:41.925268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:49.513 [2024-05-15 18:11:41.925283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.513 [2024-05-15 18:11:41.925362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.513 [2024-05-15 18:11:41.925391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:49.513 [2024-05-15 18:11:41.925406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:49.513 [2024-05-15 18:11:41.925420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.513 [2024-05-15 18:11:41.925468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:49.513 [2024-05-15 18:11:41.930639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.513 [2024-05-15 18:11:41.930678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.513 [2024-05-15 18:11:41.930699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.174 ms 00:19:49.513 [2024-05-15 18:11:41.930715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.513 [2024-05-15 18:11:41.930790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.513 [2024-05-15 18:11:41.930809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:49.513 [2024-05-15 18:11:41.930824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:49.513 [2024-05-15 18:11:41.930836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.513 [2024-05-15 18:11:41.930882] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:49.513 [2024-05-15 18:11:41.931016] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:49.513 [2024-05-15 18:11:41.931039] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:49.513 [2024-05-15 18:11:41.931056] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:49.513 [2024-05-15 18:11:41.931101] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:49.513 [2024-05-15 18:11:41.931116] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV 
cache device capacity: 5171.00 MiB 00:19:49.513 [2024-05-15 18:11:41.931132] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:49.513 [2024-05-15 18:11:41.931144] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:49.513 [2024-05-15 18:11:41.931158] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:49.513 [2024-05-15 18:11:41.931169] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:49.513 [2024-05-15 18:11:41.931187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.513 [2024-05-15 18:11:41.931200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:49.513 [2024-05-15 18:11:41.931218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:19:49.513 [2024-05-15 18:11:41.931230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.513 [2024-05-15 18:11:41.931341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.513 [2024-05-15 18:11:41.931359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:49.513 [2024-05-15 18:11:41.931375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:49.513 [2024-05-15 18:11:41.931386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.513 [2024-05-15 18:11:41.931503] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:49.513 [2024-05-15 18:11:41.931519] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:49.513 [2024-05-15 18:11:41.931538] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.513 [2024-05-15 18:11:41.931553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.513 [2024-05-15 18:11:41.931568] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:49.513 [2024-05-15 18:11:41.931578] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:49.513 [2024-05-15 18:11:41.931592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:49.513 [2024-05-15 18:11:41.931603] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:49.513 [2024-05-15 18:11:41.931617] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:49.513 [2024-05-15 18:11:41.931628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.513 [2024-05-15 18:11:41.931641] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:49.513 [2024-05-15 18:11:41.931652] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:49.513 [2024-05-15 18:11:41.931666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.513 [2024-05-15 18:11:41.931677] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:49.513 [2024-05-15 18:11:41.931692] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:49.513 [2024-05-15 18:11:41.931703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.513 [2024-05-15 18:11:41.931716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:49.513 [2024-05-15 18:11:41.931727] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:49.513 [2024-05-15 18:11:41.931742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
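(Context note: the layout numbers above are internally consistent. The FTL device exposes 23592960 user blocks, and with a 4-byte L2P address per block the full map needs 23592960 x 4 B = 90 MiB, exactly the "Region l2p ... blocks: 90.00 MiB" reported in the NV cache layout dump. Because bdev_ftl_create was invoked with --l2p_dram_limit 60, only part of that map can stay resident in DRAM, which the driver acknowledges further down with "l2p maximum resident size is: 59 (of 60) MiB". A quick back-of-the-envelope check, not part of the harness:

    # 23592960 L2P entries x 4 bytes each, expressed in MiB:
    echo $(( 23592960 * 4 / 1024 / 1024 ))   # -> 90
)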
00:19:49.513 [2024-05-15 18:11:41.931753] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:49.513 [2024-05-15 18:11:41.931769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:49.513 [2024-05-15 18:11:41.931781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:49.513 [2024-05-15 18:11:41.931795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:49.513 [2024-05-15 18:11:41.931818] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:49.513 [2024-05-15 18:11:41.931833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:49.513 [2024-05-15 18:11:41.931845] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:49.513 [2024-05-15 18:11:41.931858] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:49.513 [2024-05-15 18:11:41.931880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:49.513 [2024-05-15 18:11:41.931893] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:49.513 [2024-05-15 18:11:41.931904] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:49.513 [2024-05-15 18:11:41.931917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:49.513 [2024-05-15 18:11:41.931928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:49.513 [2024-05-15 18:11:41.931941] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:49.513 [2024-05-15 18:11:41.931952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:49.513 [2024-05-15 18:11:41.931968] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:49.513 [2024-05-15 18:11:41.931979] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:49.513 [2024-05-15 18:11:41.931992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.513 [2024-05-15 18:11:41.932002] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:49.513 [2024-05-15 18:11:41.932016] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:49.513 [2024-05-15 18:11:41.932026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.513 [2024-05-15 18:11:41.932041] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:49.513 [2024-05-15 18:11:41.932056] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:49.513 [2024-05-15 18:11:41.932070] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.514 [2024-05-15 18:11:41.932082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.514 [2024-05-15 18:11:41.932096] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:49.514 [2024-05-15 18:11:41.932107] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:49.514 [2024-05-15 18:11:41.932121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:49.514 [2024-05-15 18:11:41.932132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:49.514 [2024-05-15 18:11:41.932145] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:49.514 [2024-05-15 18:11:41.932157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:49.514 [2024-05-15 18:11:41.932175] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:49.514 [2024-05-15 18:11:41.932190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.514 [2024-05-15 18:11:41.932207] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:49.514 [2024-05-15 18:11:41.932220] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:49.514 [2024-05-15 18:11:41.932235] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:49.514 [2024-05-15 18:11:41.932247] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:49.514 [2024-05-15 18:11:41.932261] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:49.514 [2024-05-15 18:11:41.932273] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:49.514 [2024-05-15 18:11:41.932288] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:49.514 [2024-05-15 18:11:41.932694] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:49.514 [2024-05-15 18:11:41.932915] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:49.514 [2024-05-15 18:11:41.933100] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:49.514 [2024-05-15 18:11:41.933317] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:49.514 [2024-05-15 18:11:41.933490] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:49.514 [2024-05-15 18:11:41.933584] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:49.514 [2024-05-15 18:11:41.933644] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:49.514 [2024-05-15 18:11:41.933712] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.514 [2024-05-15 18:11:41.933841] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:49.514 [2024-05-15 18:11:41.933909] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:49.514 [2024-05-15 18:11:41.934049] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:49.514 [2024-05-15 18:11:41.934190] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:19:49.514 [2024-05-15 18:11:41.934365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.514 [2024-05-15 18:11:41.934533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:49.514 [2024-05-15 18:11:41.934560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.909 ms 00:19:49.514 [2024-05-15 18:11:41.934577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.514 [2024-05-15 18:11:41.956727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.514 [2024-05-15 18:11:41.956793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.514 [2024-05-15 18:11:41.956814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.042 ms 00:19:49.514 [2024-05-15 18:11:41.956829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.514 [2024-05-15 18:11:41.957017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.514 [2024-05-15 18:11:41.957042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:49.514 [2024-05-15 18:11:41.957057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:49.514 [2024-05-15 18:11:41.957073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.514 [2024-05-15 18:11:42.002832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.514 [2024-05-15 18:11:42.002903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.514 [2024-05-15 18:11:42.002942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.713 ms 00:19:49.514 [2024-05-15 18:11:42.002961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.514 [2024-05-15 18:11:42.003089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.514 [2024-05-15 18:11:42.003113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.514 [2024-05-15 18:11:42.003127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:49.514 [2024-05-15 18:11:42.003142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.514 [2024-05-15 18:11:42.003768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.514 [2024-05-15 18:11:42.003831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.514 [2024-05-15 18:11:42.003849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:19:49.514 [2024-05-15 18:11:42.003864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.514 [2024-05-15 18:11:42.004022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.514 [2024-05-15 18:11:42.004049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.514 [2024-05-15 18:11:42.004063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:19:49.514 [2024-05-15 18:11:42.004080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.797 [2024-05-15 18:11:42.036929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.797 [2024-05-15 18:11:42.037001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.797 [2024-05-15 18:11:42.037040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.808 ms 00:19:49.797 [2024-05-15 
18:11:42.037056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.797 [2024-05-15 18:11:42.051779] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:49.797 [2024-05-15 18:11:42.073217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.797 [2024-05-15 18:11:42.073288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:49.797 [2024-05-15 18:11:42.073348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.954 ms 00:19:49.797 [2024-05-15 18:11:42.073363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.797 [2024-05-15 18:11:42.155049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.797 [2024-05-15 18:11:42.155129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:49.797 [2024-05-15 18:11:42.155157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.540 ms 00:19:49.797 [2024-05-15 18:11:42.155174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.797 [2024-05-15 18:11:42.155379] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:19:49.797 [2024-05-15 18:11:42.155415] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:19:52.325 [2024-05-15 18:11:44.679487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.325 [2024-05-15 18:11:44.679566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:52.325 [2024-05-15 18:11:44.679608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2524.095 ms 00:19:52.325 [2024-05-15 18:11:44.679621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.325 [2024-05-15 18:11:44.679935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.325 [2024-05-15 18:11:44.679959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:52.325 [2024-05-15 18:11:44.679976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:19:52.325 [2024-05-15 18:11:44.679989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.325 [2024-05-15 18:11:44.711615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.325 [2024-05-15 18:11:44.711687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:52.325 [2024-05-15 18:11:44.711744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.569 ms 00:19:52.325 [2024-05-15 18:11:44.711757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.325 [2024-05-15 18:11:44.740452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.325 [2024-05-15 18:11:44.740496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:52.325 [2024-05-15 18:11:44.740534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.535 ms 00:19:52.325 [2024-05-15 18:11:44.740546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.325 [2024-05-15 18:11:44.741042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.325 [2024-05-15 18:11:44.741069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:52.325 [2024-05-15 18:11:44.741087] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:19:52.325 [2024-05-15 18:11:44.741098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.325 [2024-05-15 18:11:44.815910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.325 [2024-05-15 18:11:44.815979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:52.325 [2024-05-15 18:11:44.816025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.728 ms 00:19:52.325 [2024-05-15 18:11:44.816037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.585 [2024-05-15 18:11:44.848172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.585 [2024-05-15 18:11:44.848253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:52.585 [2024-05-15 18:11:44.848293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.025 ms 00:19:52.585 [2024-05-15 18:11:44.848336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.585 [2024-05-15 18:11:44.852825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.585 [2024-05-15 18:11:44.852872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:52.585 [2024-05-15 18:11:44.852893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.379 ms 00:19:52.585 [2024-05-15 18:11:44.852905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.585 [2024-05-15 18:11:44.884702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.585 [2024-05-15 18:11:44.884743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:52.585 [2024-05-15 18:11:44.884782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.709 ms 00:19:52.585 [2024-05-15 18:11:44.884793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.585 [2024-05-15 18:11:44.884908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.585 [2024-05-15 18:11:44.884930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:52.585 [2024-05-15 18:11:44.884946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:52.585 [2024-05-15 18:11:44.884957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.585 [2024-05-15 18:11:44.885055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.585 [2024-05-15 18:11:44.885071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:52.585 [2024-05-15 18:11:44.885086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:52.585 [2024-05-15 18:11:44.885097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.585 [2024-05-15 18:11:44.886259] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:52.585 [2024-05-15 18:11:44.890250] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2977.908 ms, result 0 00:19:52.585 [2024-05-15 18:11:44.891173] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:52.585 { 00:19:52.585 "name": "ftl0", 00:19:52.585 "uuid": "08a666aa-140a-4706-af52-1f2e14a3178c" 00:19:52.585 } 00:19:52.585 18:11:44 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 
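For reference, the one-line JSON fragment just above — {"name": "ftl0", "uuid": "08a666aa-140a-4706-af52-1f2e14a3178c"} — is the RPC response announcing the freshly created FTL bdev, and waitforbdev then polls until that bdev is queryable. A minimal sketch of the RPC sequence being exercised here, with the base and cache bdev names taken from the driver_specific block below (the bdev_ftl_create flag spelling is an assumption for this SPDK revision, not captured output):

  # create the FTL bdev on top of the base bdev and the NV-cache partition (flags assumed)
  scripts/rpc.py bdev_ftl_create -b ftl0 -d 780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0 -c nvc0n1p0
  # let bdev auto-examine settle, then poll until ftl0 is registered (2000 ms timeout)
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000

The waitforbdev trace that follows performs exactly those last two rpc.py calls before returning 0.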
00:19:52.585 18:11:44 ftl.ftl_trim -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:19:52.585 18:11:44 ftl.ftl_trim -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:52.585 18:11:44 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local i 00:19:52.585 18:11:44 ftl.ftl_trim -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:52.585 18:11:44 ftl.ftl_trim -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:52.585 18:11:44 ftl.ftl_trim -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:52.844 18:11:45 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:53.103 [ 00:19:53.103 { 00:19:53.103 "name": "ftl0", 00:19:53.103 "aliases": [ 00:19:53.103 "08a666aa-140a-4706-af52-1f2e14a3178c" 00:19:53.103 ], 00:19:53.103 "product_name": "FTL disk", 00:19:53.103 "block_size": 4096, 00:19:53.103 "num_blocks": 23592960, 00:19:53.103 "uuid": "08a666aa-140a-4706-af52-1f2e14a3178c", 00:19:53.103 "assigned_rate_limits": { 00:19:53.103 "rw_ios_per_sec": 0, 00:19:53.103 "rw_mbytes_per_sec": 0, 00:19:53.103 "r_mbytes_per_sec": 0, 00:19:53.103 "w_mbytes_per_sec": 0 00:19:53.103 }, 00:19:53.103 "claimed": false, 00:19:53.103 "zoned": false, 00:19:53.103 "supported_io_types": { 00:19:53.103 "read": true, 00:19:53.103 "write": true, 00:19:53.103 "unmap": true, 00:19:53.103 "write_zeroes": true, 00:19:53.103 "flush": true, 00:19:53.103 "reset": false, 00:19:53.103 "compare": false, 00:19:53.103 "compare_and_write": false, 00:19:53.103 "abort": false, 00:19:53.103 "nvme_admin": false, 00:19:53.103 "nvme_io": false 00:19:53.103 }, 00:19:53.103 "driver_specific": { 00:19:53.103 "ftl": { 00:19:53.103 "base_bdev": "780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0", 00:19:53.103 "cache": "nvc0n1p0" 00:19:53.103 } 00:19:53.103 } 00:19:53.103 } 00:19:53.103 ] 00:19:53.103 18:11:45 ftl.ftl_trim -- common/autotest_common.sh@903 -- # return 0 00:19:53.103 18:11:45 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:53.103 18:11:45 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:53.371 18:11:45 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:53.371 18:11:45 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:53.648 18:11:45 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:53.648 { 00:19:53.648 "name": "ftl0", 00:19:53.648 "aliases": [ 00:19:53.648 "08a666aa-140a-4706-af52-1f2e14a3178c" 00:19:53.648 ], 00:19:53.648 "product_name": "FTL disk", 00:19:53.648 "block_size": 4096, 00:19:53.648 "num_blocks": 23592960, 00:19:53.648 "uuid": "08a666aa-140a-4706-af52-1f2e14a3178c", 00:19:53.648 "assigned_rate_limits": { 00:19:53.648 "rw_ios_per_sec": 0, 00:19:53.648 "rw_mbytes_per_sec": 0, 00:19:53.648 "r_mbytes_per_sec": 0, 00:19:53.648 "w_mbytes_per_sec": 0 00:19:53.648 }, 00:19:53.648 "claimed": false, 00:19:53.648 "zoned": false, 00:19:53.648 "supported_io_types": { 00:19:53.648 "read": true, 00:19:53.648 "write": true, 00:19:53.648 "unmap": true, 00:19:53.648 "write_zeroes": true, 00:19:53.648 "flush": true, 00:19:53.648 "reset": false, 00:19:53.648 "compare": false, 00:19:53.648 "compare_and_write": false, 00:19:53.648 "abort": false, 00:19:53.648 "nvme_admin": false, 00:19:53.648 "nvme_io": false 00:19:53.648 }, 00:19:53.648 "driver_specific": { 00:19:53.648 "ftl": { 00:19:53.648 "base_bdev": 
"780dd35a-f5c3-4e9c-b3d0-fe60e9dc4ce0", 00:19:53.648 "cache": "nvc0n1p0" 00:19:53.648 } 00:19:53.648 } 00:19:53.648 } 00:19:53.648 ]' 00:19:53.648 18:11:46 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:53.648 18:11:46 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:53.648 18:11:46 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:53.907 [2024-05-15 18:11:46.250408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.907 [2024-05-15 18:11:46.250505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:53.907 [2024-05-15 18:11:46.250528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:53.907 [2024-05-15 18:11:46.250544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.907 [2024-05-15 18:11:46.250593] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:53.907 [2024-05-15 18:11:46.254335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.907 [2024-05-15 18:11:46.254369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:53.907 [2024-05-15 18:11:46.254389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.686 ms 00:19:53.907 [2024-05-15 18:11:46.254401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.907 [2024-05-15 18:11:46.255038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.907 [2024-05-15 18:11:46.255073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:53.907 [2024-05-15 18:11:46.255093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:19:53.908 [2024-05-15 18:11:46.255108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.908 [2024-05-15 18:11:46.258761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.908 [2024-05-15 18:11:46.258792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:53.908 [2024-05-15 18:11:46.258811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.612 ms 00:19:53.908 [2024-05-15 18:11:46.258823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.908 [2024-05-15 18:11:46.266196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.908 [2024-05-15 18:11:46.266231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:53.908 [2024-05-15 18:11:46.266251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.305 ms 00:19:53.908 [2024-05-15 18:11:46.266263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.908 [2024-05-15 18:11:46.297811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.908 [2024-05-15 18:11:46.297866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:53.908 [2024-05-15 18:11:46.297906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.369 ms 00:19:53.908 [2024-05-15 18:11:46.297919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.908 [2024-05-15 18:11:46.316763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.908 [2024-05-15 18:11:46.316818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:53.908 [2024-05-15 18:11:46.316857] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.740 ms 00:19:53.908 [2024-05-15 18:11:46.316870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.908 [2024-05-15 18:11:46.317133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.908 [2024-05-15 18:11:46.317155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:53.908 [2024-05-15 18:11:46.317172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:19:53.908 [2024-05-15 18:11:46.317185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.908 [2024-05-15 18:11:46.347492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.908 [2024-05-15 18:11:46.347582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:53.908 [2024-05-15 18:11:46.347608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.249 ms 00:19:53.908 [2024-05-15 18:11:46.347622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.908 [2024-05-15 18:11:46.380341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.908 [2024-05-15 18:11:46.380419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:53.908 [2024-05-15 18:11:46.380477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.538 ms 00:19:53.908 [2024-05-15 18:11:46.380490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.168 [2024-05-15 18:11:46.410140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.168 [2024-05-15 18:11:46.410201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:54.168 [2024-05-15 18:11:46.410226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.527 ms 00:19:54.168 [2024-05-15 18:11:46.410239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.168 [2024-05-15 18:11:46.440870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.168 [2024-05-15 18:11:46.440956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:54.168 [2024-05-15 18:11:46.440982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.396 ms 00:19:54.168 [2024-05-15 18:11:46.440995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.168 [2024-05-15 18:11:46.441126] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:54.168 [2024-05-15 18:11:46.441154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 
[2024-05-15 18:11:46.441257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:19:54.169 [2024-05-15 18:11:46.441660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.441992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:54.169 [2024-05-15 18:11:46.442536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:54.170 [2024-05-15 18:11:46.442550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:54.170 [2024-05-15 18:11:46.442563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:54.170 [2024-05-15 18:11:46.442580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:54.170 [2024-05-15 18:11:46.442594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:54.170 [2024-05-15 18:11:46.442610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:54.170 [2024-05-15 18:11:46.442624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:54.170 [2024-05-15 18:11:46.442639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:54.170 [2024-05-15 18:11:46.442661] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:54.170 [2024-05-15 18:11:46.442677] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 08a666aa-140a-4706-af52-1f2e14a3178c 00:19:54.170 [2024-05-15 18:11:46.442690] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:54.170 [2024-05-15 18:11:46.442708] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:54.170 [2024-05-15 18:11:46.442719] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:54.170 [2024-05-15 18:11:46.442734] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:54.170 [2024-05-15 18:11:46.442746] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:54.170 [2024-05-15 18:11:46.442760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:19:54.170 [2024-05-15 18:11:46.442772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:54.170 [2024-05-15 18:11:46.442784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:54.170 [2024-05-15 18:11:46.442795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:54.170 [2024-05-15 18:11:46.442812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.170 [2024-05-15 18:11:46.442824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:54.170 [2024-05-15 18:11:46.442839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.693 ms 00:19:54.170 [2024-05-15 18:11:46.442851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.170 [2024-05-15 18:11:46.459976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.170 [2024-05-15 18:11:46.460044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:54.170 [2024-05-15 18:11:46.460068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.073 ms 00:19:54.170 [2024-05-15 18:11:46.460081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.170 [2024-05-15 18:11:46.460468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.170 [2024-05-15 18:11:46.460495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:54.170 [2024-05-15 18:11:46.460512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:19:54.170 [2024-05-15 18:11:46.460525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.170 [2024-05-15 18:11:46.519250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.170 [2024-05-15 18:11:46.519351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.170 [2024-05-15 18:11:46.519393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.170 [2024-05-15 18:11:46.519407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.170 [2024-05-15 18:11:46.519573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.170 [2024-05-15 18:11:46.519593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.170 [2024-05-15 18:11:46.519608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.170 [2024-05-15 18:11:46.519620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.170 [2024-05-15 18:11:46.519717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.170 [2024-05-15 18:11:46.519736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.170 [2024-05-15 18:11:46.519752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.170 [2024-05-15 18:11:46.519763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.170 [2024-05-15 18:11:46.519822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.170 [2024-05-15 18:11:46.519839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.170 [2024-05-15 18:11:46.519853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.170 [2024-05-15 18:11:46.519865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.170 [2024-05-15 
18:11:46.640302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.170 [2024-05-15 18:11:46.640380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.170 [2024-05-15 18:11:46.640405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.170 [2024-05-15 18:11:46.640418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.429 [2024-05-15 18:11:46.680948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.429 [2024-05-15 18:11:46.681014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.429 [2024-05-15 18:11:46.681055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.430 [2024-05-15 18:11:46.681068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.430 [2024-05-15 18:11:46.681196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.430 [2024-05-15 18:11:46.681220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.430 [2024-05-15 18:11:46.681235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.430 [2024-05-15 18:11:46.681247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.430 [2024-05-15 18:11:46.681355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.430 [2024-05-15 18:11:46.681374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.430 [2024-05-15 18:11:46.681390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.430 [2024-05-15 18:11:46.681402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.430 [2024-05-15 18:11:46.681561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.430 [2024-05-15 18:11:46.681581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.430 [2024-05-15 18:11:46.681600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.430 [2024-05-15 18:11:46.681612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.430 [2024-05-15 18:11:46.681698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.430 [2024-05-15 18:11:46.681716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:54.430 [2024-05-15 18:11:46.681732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.430 [2024-05-15 18:11:46.681743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.430 [2024-05-15 18:11:46.681837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.430 [2024-05-15 18:11:46.681861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.430 [2024-05-15 18:11:46.681881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.430 [2024-05-15 18:11:46.681893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.430 [2024-05-15 18:11:46.681971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.430 [2024-05-15 18:11:46.681988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.430 [2024-05-15 18:11:46.682003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.430 [2024-05-15 18:11:46.682015] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.430 [2024-05-15 18:11:46.682252] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 431.822 ms, result 0 00:19:54.430 true 00:19:54.430 18:11:46 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78344 00:19:54.430 18:11:46 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 78344 ']' 00:19:54.430 18:11:46 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 78344 00:19:54.430 18:11:46 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:19:54.430 18:11:46 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:54.430 18:11:46 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78344 00:19:54.430 killing process with pid 78344 00:19:54.430 18:11:46 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:54.430 18:11:46 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:54.430 18:11:46 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78344' 00:19:54.430 18:11:46 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 78344 00:19:54.430 18:11:46 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 78344 00:19:59.699 18:11:51 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:00.265 65536+0 records in 00:20:00.265 65536+0 records out 00:20:00.265 268435456 bytes (268 MB, 256 MiB) copied, 1.11065 s, 242 MB/s 00:20:00.265 18:11:52 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:00.531 [2024-05-15 18:11:52.787244] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
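With the device shut down cleanly ('FTL shutdown', result 0) and setup process 78344 killed, the test turns to the data path: plain dd generates a 256 MiB random pattern (65536 blocks of 4 KiB, copied at 242 MB/s above), and spdk_dd — which boots its own SPDK application, producing the FTL startup trace that follows — writes that pattern through the ftl0 bdev described by ftl.json. A condensed sketch of the flow (paths shortened and the dd output destination made explicit for illustration; the final read-back line is an assumed later verification step, not output captured here):

  # generate 256 MiB of random data into a regular file
  dd if=/dev/urandom of=test/ftl/random_pattern bs=4K count=65536
  # write the pattern into the FTL bdev: --if reads a file, --ob targets a bdev
  spdk_dd --if=test/ftl/random_pattern --ob=ftl0 --json=test/ftl/config/ftl.json
  # read the data back for comparison: --ib sources a bdev, --of writes a file (assumed step)
  spdk_dd --ib=ftl0 --of=/tmp/ftl_readback --json=test/ftl/config/ftl.json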
00:20:00.531 [2024-05-15 18:11:52.787406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78543 ] 00:20:00.531 [2024-05-15 18:11:52.957477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.791 [2024-05-15 18:11:53.220930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.358 [2024-05-15 18:11:53.567899] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:01.358 [2024-05-15 18:11:53.567986] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:01.358 [2024-05-15 18:11:53.730789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.358 [2024-05-15 18:11:53.730871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:01.358 [2024-05-15 18:11:53.730921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:01.358 [2024-05-15 18:11:53.730939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.358 [2024-05-15 18:11:53.735070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.358 [2024-05-15 18:11:53.735119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:01.358 [2024-05-15 18:11:53.735149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.089 ms 00:20:01.358 [2024-05-15 18:11:53.735176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.358 [2024-05-15 18:11:53.735563] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:01.358 [2024-05-15 18:11:53.736784] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:01.358 [2024-05-15 18:11:53.736835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.358 [2024-05-15 18:11:53.736859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:01.358 [2024-05-15 18:11:53.736883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:20:01.358 [2024-05-15 18:11:53.736902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.358 [2024-05-15 18:11:53.739083] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:01.358 [2024-05-15 18:11:53.756240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.358 [2024-05-15 18:11:53.756335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:01.358 [2024-05-15 18:11:53.756382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.159 ms 00:20:01.358 [2024-05-15 18:11:53.756422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.358 [2024-05-15 18:11:53.756580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.358 [2024-05-15 18:11:53.756618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:01.358 [2024-05-15 18:11:53.756655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:01.358 [2024-05-15 18:11:53.756690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.358 [2024-05-15 18:11:53.766021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.358 [2024-05-15 
18:11:53.766068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:01.358 [2024-05-15 18:11:53.766094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.158 ms 00:20:01.358 [2024-05-15 18:11:53.766113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.358 [2024-05-15 18:11:53.766350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.358 [2024-05-15 18:11:53.766384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:01.358 [2024-05-15 18:11:53.766405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:01.358 [2024-05-15 18:11:53.766424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.358 [2024-05-15 18:11:53.766504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.358 [2024-05-15 18:11:53.766561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:01.358 [2024-05-15 18:11:53.766584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:01.358 [2024-05-15 18:11:53.766604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.358 [2024-05-15 18:11:53.766707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:01.358 [2024-05-15 18:11:53.771899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.358 [2024-05-15 18:11:53.771943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:01.359 [2024-05-15 18:11:53.771984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.206 ms 00:20:01.359 [2024-05-15 18:11:53.772003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.359 [2024-05-15 18:11:53.772098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.359 [2024-05-15 18:11:53.772127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:01.359 [2024-05-15 18:11:53.772177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:01.359 [2024-05-15 18:11:53.772196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.359 [2024-05-15 18:11:53.772245] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:01.359 [2024-05-15 18:11:53.772293] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:01.359 [2024-05-15 18:11:53.772388] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:01.359 [2024-05-15 18:11:53.772433] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:01.359 [2024-05-15 18:11:53.772572] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:01.359 [2024-05-15 18:11:53.772603] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:01.359 [2024-05-15 18:11:53.772645] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:01.359 [2024-05-15 18:11:53.772672] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:01.359 [2024-05-15 18:11:53.772694] ftl_layout.c: 675:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:01.359 [2024-05-15 18:11:53.772716] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:01.359 [2024-05-15 18:11:53.772736] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:01.359 [2024-05-15 18:11:53.772754] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:01.359 [2024-05-15 18:11:53.772790] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:01.359 [2024-05-15 18:11:53.772812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.359 [2024-05-15 18:11:53.772839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:01.359 [2024-05-15 18:11:53.772860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:20:01.359 [2024-05-15 18:11:53.772879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.359 [2024-05-15 18:11:53.772994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.359 [2024-05-15 18:11:53.773031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:01.359 [2024-05-15 18:11:53.773055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:01.359 [2024-05-15 18:11:53.773075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.359 [2024-05-15 18:11:53.773201] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:01.359 [2024-05-15 18:11:53.773232] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:01.359 [2024-05-15 18:11:53.773263] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.359 [2024-05-15 18:11:53.773284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.359 [2024-05-15 18:11:53.773323] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:01.359 [2024-05-15 18:11:53.773345] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:01.359 [2024-05-15 18:11:53.773364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:01.359 [2024-05-15 18:11:53.773383] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:01.359 [2024-05-15 18:11:53.773402] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:01.359 [2024-05-15 18:11:53.773421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.359 [2024-05-15 18:11:53.773439] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:01.359 [2024-05-15 18:11:53.773459] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:01.359 [2024-05-15 18:11:53.773477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.359 [2024-05-15 18:11:53.773513] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:01.359 [2024-05-15 18:11:53.773534] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:01.359 [2024-05-15 18:11:53.773582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.359 [2024-05-15 18:11:53.773601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:01.359 [2024-05-15 18:11:53.773619] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:01.359 [2024-05-15 18:11:53.773667] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:20:01.359 [2024-05-15 18:11:53.773684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:01.359 [2024-05-15 18:11:53.773702] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:01.359 [2024-05-15 18:11:53.773721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:01.359 [2024-05-15 18:11:53.773737] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:01.359 [2024-05-15 18:11:53.773755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:01.359 [2024-05-15 18:11:53.773772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:01.359 [2024-05-15 18:11:53.773788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:01.359 [2024-05-15 18:11:53.773821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:01.359 [2024-05-15 18:11:53.773839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:01.359 [2024-05-15 18:11:53.773857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:01.359 [2024-05-15 18:11:53.773876] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:01.359 [2024-05-15 18:11:53.773909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:01.359 [2024-05-15 18:11:53.773928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:01.359 [2024-05-15 18:11:53.773947] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:01.359 [2024-05-15 18:11:53.773967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:01.359 [2024-05-15 18:11:53.773985] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:01.359 [2024-05-15 18:11:53.774003] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:01.359 [2024-05-15 18:11:53.774021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.359 [2024-05-15 18:11:53.774039] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:01.359 [2024-05-15 18:11:53.774057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:01.359 [2024-05-15 18:11:53.774075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.359 [2024-05-15 18:11:53.774092] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:01.359 [2024-05-15 18:11:53.774112] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:01.359 [2024-05-15 18:11:53.774132] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.359 [2024-05-15 18:11:53.774152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.359 [2024-05-15 18:11:53.774171] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:01.359 [2024-05-15 18:11:53.774190] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:01.359 [2024-05-15 18:11:53.774224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:01.359 [2024-05-15 18:11:53.774243] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:01.359 [2024-05-15 18:11:53.774261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:01.359 [2024-05-15 18:11:53.774280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:01.359 [2024-05-15 18:11:53.774299] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:01.359 [2024-05-15 18:11:53.774322] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.359 [2024-05-15 18:11:53.774363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:01.359 [2024-05-15 18:11:53.774383] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:01.359 [2024-05-15 18:11:53.774419] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:01.359 [2024-05-15 18:11:53.774462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:01.359 [2024-05-15 18:11:53.774500] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:01.359 [2024-05-15 18:11:53.774520] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:01.359 [2024-05-15 18:11:53.774540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:01.359 [2024-05-15 18:11:53.774560] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:01.359 [2024-05-15 18:11:53.774581] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:01.359 [2024-05-15 18:11:53.774602] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:01.359 [2024-05-15 18:11:53.774632] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:01.359 [2024-05-15 18:11:53.774654] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:01.359 [2024-05-15 18:11:53.774676] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:01.359 [2024-05-15 18:11:53.774697] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:01.359 [2024-05-15 18:11:53.774725] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.359 [2024-05-15 18:11:53.774764] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:01.359 [2024-05-15 18:11:53.774786] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:01.359 [2024-05-15 18:11:53.774807] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:01.359 [2024-05-15 18:11:53.774828] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:01.359 [2024-05-15 18:11:53.774851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.359 [2024-05-15 18:11:53.774871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:01.359 [2024-05-15 18:11:53.774892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.703 ms 00:20:01.359 [2024-05-15 18:11:53.774912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.360 [2024-05-15 18:11:53.797050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.360 [2024-05-15 18:11:53.797099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:01.360 [2024-05-15 18:11:53.797127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.020 ms 00:20:01.360 [2024-05-15 18:11:53.797146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.360 [2024-05-15 18:11:53.797376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.360 [2024-05-15 18:11:53.797422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:01.360 [2024-05-15 18:11:53.797443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:20:01.360 [2024-05-15 18:11:53.797461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.360 [2024-05-15 18:11:53.845796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.360 [2024-05-15 18:11:53.845889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.360 [2024-05-15 18:11:53.845926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.283 ms 00:20:01.360 [2024-05-15 18:11:53.845946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.360 [2024-05-15 18:11:53.846129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.360 [2024-05-15 18:11:53.846157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:01.360 [2024-05-15 18:11:53.846179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:01.360 [2024-05-15 18:11:53.846214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.360 [2024-05-15 18:11:53.847011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.360 [2024-05-15 18:11:53.847053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:01.360 [2024-05-15 18:11:53.847080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:20:01.360 [2024-05-15 18:11:53.847102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.360 [2024-05-15 18:11:53.847406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.360 [2024-05-15 18:11:53.847463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:01.360 [2024-05-15 18:11:53.847503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:20:01.360 [2024-05-15 18:11:53.847538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.619 [2024-05-15 18:11:53.870424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.619 [2024-05-15 18:11:53.870515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:01.619 [2024-05-15 18:11:53.870547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.791 ms 00:20:01.619 
[2024-05-15 18:11:53.870568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.619 [2024-05-15 18:11:53.888764] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:01.619 [2024-05-15 18:11:53.888813] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:01.619 [2024-05-15 18:11:53.888870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.619 [2024-05-15 18:11:53.888891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:01.619 [2024-05-15 18:11:53.888914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.967 ms 00:20:01.619 [2024-05-15 18:11:53.888933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.619 [2024-05-15 18:11:53.917707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.619 [2024-05-15 18:11:53.917753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:01.619 [2024-05-15 18:11:53.917780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.652 ms 00:20:01.619 [2024-05-15 18:11:53.917852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.619 [2024-05-15 18:11:53.932309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.619 [2024-05-15 18:11:53.932353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:01.619 [2024-05-15 18:11:53.932402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.359 ms 00:20:01.619 [2024-05-15 18:11:53.932422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.619 [2024-05-15 18:11:53.946464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.619 [2024-05-15 18:11:53.946505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:01.619 [2024-05-15 18:11:53.946530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.927 ms 00:20:01.619 [2024-05-15 18:11:53.946549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.619 [2024-05-15 18:11:53.947229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.619 [2024-05-15 18:11:53.947274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:01.619 [2024-05-15 18:11:53.947337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:20:01.619 [2024-05-15 18:11:53.947377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.619 [2024-05-15 18:11:54.031641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.619 [2024-05-15 18:11:54.031768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:01.619 [2024-05-15 18:11:54.031806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.205 ms 00:20:01.619 [2024-05-15 18:11:54.031859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.619 [2024-05-15 18:11:54.047174] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:01.619 [2024-05-15 18:11:54.071497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.619 [2024-05-15 18:11:54.071613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:01.619 [2024-05-15 18:11:54.071664] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.406 ms 00:20:01.619 [2024-05-15 18:11:54.071687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.619 [2024-05-15 18:11:54.071961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.619 [2024-05-15 18:11:54.071993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:01.619 [2024-05-15 18:11:54.072018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:01.619 [2024-05-15 18:11:54.072039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.619 [2024-05-15 18:11:54.072156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.619 [2024-05-15 18:11:54.072199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:01.619 [2024-05-15 18:11:54.072241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:01.619 [2024-05-15 18:11:54.072262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.619 [2024-05-15 18:11:54.077116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.620 [2024-05-15 18:11:54.077182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:01.620 [2024-05-15 18:11:54.077225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.732 ms 00:20:01.620 [2024-05-15 18:11:54.077245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.620 [2024-05-15 18:11:54.077391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.620 [2024-05-15 18:11:54.077420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:01.620 [2024-05-15 18:11:54.077460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:01.620 [2024-05-15 18:11:54.077498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.620 [2024-05-15 18:11:54.077584] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:01.620 [2024-05-15 18:11:54.077614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.620 [2024-05-15 18:11:54.077634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:01.620 [2024-05-15 18:11:54.077656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:01.620 [2024-05-15 18:11:54.077676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.620 [2024-05-15 18:11:54.118901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.620 [2024-05-15 18:11:54.119219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:01.620 [2024-05-15 18:11:54.119386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.156 ms 00:20:01.620 [2024-05-15 18:11:54.119513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.620 [2024-05-15 18:11:54.119702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.620 [2024-05-15 18:11:54.119823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:01.620 [2024-05-15 18:11:54.120013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:01.620 [2024-05-15 18:11:54.120067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.879 [2024-05-15 18:11:54.121543] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:01.879 [2024-05-15 18:11:54.125738] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 390.316 ms, result 0 00:20:01.879 [2024-05-15 18:11:54.126772] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:01.879 [2024-05-15 18:11:54.142263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:12.700  Copying: 22/256 [MB] (22 MBps) Copying: 47/256 [MB] (24 MBps) Copying: 71/256 [MB] (24 MBps) Copying: 95/256 [MB] (24 MBps) Copying: 120/256 [MB] (25 MBps) Copying: 143/256 [MB] (23 MBps) Copying: 167/256 [MB] (23 MBps) Copying: 190/256 [MB] (23 MBps) Copying: 215/256 [MB] (24 MBps) Copying: 239/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-05-15 18:12:04.832584] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:12.700 [2024-05-15 18:12:04.844823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.700 [2024-05-15 18:12:04.844867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:12.700 [2024-05-15 18:12:04.844903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:12.700 [2024-05-15 18:12:04.844914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.700 [2024-05-15 18:12:04.844944] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:12.700 [2024-05-15 18:12:04.848693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.700 [2024-05-15 18:12:04.848743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:12.700 [2024-05-15 18:12:04.848759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.728 ms 00:20:12.700 [2024-05-15 18:12:04.848770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.700 [2024-05-15 18:12:04.850578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.700 [2024-05-15 18:12:04.850620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:12.700 [2024-05-15 18:12:04.850682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.763 ms 00:20:12.700 [2024-05-15 18:12:04.850694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.700 [2024-05-15 18:12:04.858570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.700 [2024-05-15 18:12:04.858606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:12.700 [2024-05-15 18:12:04.858638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.852 ms 00:20:12.700 [2024-05-15 18:12:04.858662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.700 [2024-05-15 18:12:04.865350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.700 [2024-05-15 18:12:04.865380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:12.700 [2024-05-15 18:12:04.865410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.648 ms 00:20:12.700 [2024-05-15 18:12:04.865426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.701 [2024-05-15 18:12:04.894488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:12.701 [2024-05-15 18:12:04.894543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:12.701 [2024-05-15 18:12:04.894576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.004 ms 00:20:12.701 [2024-05-15 18:12:04.894588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.701 [2024-05-15 18:12:04.911698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.701 [2024-05-15 18:12:04.911767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:12.701 [2024-05-15 18:12:04.911787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.043 ms 00:20:12.701 [2024-05-15 18:12:04.911798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.701 [2024-05-15 18:12:04.912033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.701 [2024-05-15 18:12:04.912052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:12.701 [2024-05-15 18:12:04.912066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:12.701 [2024-05-15 18:12:04.912078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.701 [2024-05-15 18:12:04.944578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.701 [2024-05-15 18:12:04.944625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:12.701 [2024-05-15 18:12:04.944660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.475 ms 00:20:12.701 [2024-05-15 18:12:04.944686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.701 [2024-05-15 18:12:04.975014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.701 [2024-05-15 18:12:04.975060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:12.701 [2024-05-15 18:12:04.975094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.264 ms 00:20:12.701 [2024-05-15 18:12:04.975104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.701 [2024-05-15 18:12:05.004483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.701 [2024-05-15 18:12:05.004541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:12.701 [2024-05-15 18:12:05.004575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.310 ms 00:20:12.701 [2024-05-15 18:12:05.004586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.701 [2024-05-15 18:12:05.033635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.701 [2024-05-15 18:12:05.033688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:12.701 [2024-05-15 18:12:05.033722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.940 ms 00:20:12.701 [2024-05-15 18:12:05.033733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.701 [2024-05-15 18:12:05.033798] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:12.701 [2024-05-15 18:12:05.033838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.033874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.033887] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.033898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.033910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.033921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.033933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.033944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.033956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.033968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.033979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.033990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034192] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 
[2024-05-15 18:12:05.034570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:12.701 [2024-05-15 18:12:05.034774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:20:12.702 [2024-05-15 18:12:05.034934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.034992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:12.702 [2024-05-15 18:12:05.035242] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:12.702 [2024-05-15 18:12:05.035254] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 08a666aa-140a-4706-af52-1f2e14a3178c 
00:20:12.702 [2024-05-15 18:12:05.035266] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:12.702 [2024-05-15 18:12:05.035277] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:12.702 [2024-05-15 18:12:05.035288] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:12.702 [2024-05-15 18:12:05.035313] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:12.702 [2024-05-15 18:12:05.035325] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:12.702 [2024-05-15 18:12:05.035336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:12.702 [2024-05-15 18:12:05.035348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:12.702 [2024-05-15 18:12:05.035358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:12.702 [2024-05-15 18:12:05.035368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:12.702 [2024-05-15 18:12:05.035390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.702 [2024-05-15 18:12:05.035400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:12.702 [2024-05-15 18:12:05.035423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.594 ms 00:20:12.702 [2024-05-15 18:12:05.035445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.702 [2024-05-15 18:12:05.052116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.702 [2024-05-15 18:12:05.052174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:12.702 [2024-05-15 18:12:05.052223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.644 ms 00:20:12.702 [2024-05-15 18:12:05.052242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.702 [2024-05-15 18:12:05.052579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.702 [2024-05-15 18:12:05.052605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:12.702 [2024-05-15 18:12:05.052633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:20:12.702 [2024-05-15 18:12:05.052644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.702 [2024-05-15 18:12:05.101449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.702 [2024-05-15 18:12:05.101518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:12.702 [2024-05-15 18:12:05.101553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.702 [2024-05-15 18:12:05.101564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.702 [2024-05-15 18:12:05.101677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.702 [2024-05-15 18:12:05.101694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:12.702 [2024-05-15 18:12:05.101720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.702 [2024-05-15 18:12:05.101731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.702 [2024-05-15 18:12:05.101794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.702 [2024-05-15 18:12:05.101812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:12.702 [2024-05-15 18:12:05.101831] 
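The statistics dump above reports WAF: inf because the device has recorded 960 total writes against 0 user writes: everything written so far was internal metadata from the startup/shutdown cycle, and write amplification is conventionally the ratio of device writes to host writes. A minimal check of the same arithmetic, with the values copied from the dump:

  # WAF consistent with the dump above: device writes / user writes.
  total_writes=960 user_writes=0
  if (( user_writes == 0 )); then
      echo "WAF: inf"                        # no host I/O has landed yet
  else
      bc -l <<< "$total_writes / $user_writes"
  fi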
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.702 [2024-05-15 18:12:05.101841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.702 [2024-05-15 18:12:05.101865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.702 [2024-05-15 18:12:05.101878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:12.702 [2024-05-15 18:12:05.101889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.702 [2024-05-15 18:12:05.101912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.961 [2024-05-15 18:12:05.203801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.961 [2024-05-15 18:12:05.203911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:12.961 [2024-05-15 18:12:05.203930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.961 [2024-05-15 18:12:05.203943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.961 [2024-05-15 18:12:05.244107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.961 [2024-05-15 18:12:05.244184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:12.961 [2024-05-15 18:12:05.244245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.961 [2024-05-15 18:12:05.244258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.961 [2024-05-15 18:12:05.244364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.961 [2024-05-15 18:12:05.244383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:12.961 [2024-05-15 18:12:05.244396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.961 [2024-05-15 18:12:05.244408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.961 [2024-05-15 18:12:05.244445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.961 [2024-05-15 18:12:05.244459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:12.961 [2024-05-15 18:12:05.244472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.961 [2024-05-15 18:12:05.244482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.962 [2024-05-15 18:12:05.244624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.962 [2024-05-15 18:12:05.244644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:12.962 [2024-05-15 18:12:05.244657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.962 [2024-05-15 18:12:05.244668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.962 [2024-05-15 18:12:05.244731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.962 [2024-05-15 18:12:05.244750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:12.962 [2024-05-15 18:12:05.244762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.962 [2024-05-15 18:12:05.244773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.962 [2024-05-15 18:12:05.244830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.962 [2024-05-15 18:12:05.244853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:20:12.962 [2024-05-15 18:12:05.244866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.962 [2024-05-15 18:12:05.244877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.962 [2024-05-15 18:12:05.244933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.962 [2024-05-15 18:12:05.244950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:12.962 [2024-05-15 18:12:05.244962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.962 [2024-05-15 18:12:05.244973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.962 [2024-05-15 18:12:05.245166] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 400.327 ms, result 0 00:20:14.339 00:20:14.339 00:20:14.339 18:12:06 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78679 00:20:14.339 18:12:06 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:14.339 18:12:06 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78679 00:20:14.339 18:12:06 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 78679 ']' 00:20:14.339 18:12:06 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.339 18:12:06 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:14.339 18:12:06 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.339 18:12:06 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:14.339 18:12:06 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:14.339 [2024-05-15 18:12:06.693526] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
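With 'FTL shutdown' finished (the Rollback records above unwind the startup actions in reverse order), trim.sh relaunches the target for the next test case: it starts spdk_tgt with the ftl_init log flag, stashes the PID in svcpid, and waitforlisten polls until the RPC socket /var/tmp/spdk.sock answers before the saved configuration is replayed with rpc.py load_config. A minimal sketch of that start-and-wait pattern, using the paths shown in the log; the rpc_get_methods probe, the 0.5 s poll interval, and the config.json stand-in are illustrative, not the script's exact internals:

  # Start the target and wait for its RPC socket before loading config.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  for ((i = 0; i < 100; i++)); do                      # mirrors max_retries=100
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods &> /dev/null && break
      sleep 0.5
  done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < config.json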
00:20:14.339 [2024-05-15 18:12:06.693671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78679 ] 00:20:14.598 [2024-05-15 18:12:06.860177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.598 [2024-05-15 18:12:07.092391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.535 18:12:07 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:15.535 18:12:07 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:20:15.535 18:12:07 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:15.794 [2024-05-15 18:12:08.162943] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:15.794 [2024-05-15 18:12:08.163033] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.055 [2024-05-15 18:12:08.347580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.055 [2024-05-15 18:12:08.347658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:16.055 [2024-05-15 18:12:08.347690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:16.055 [2024-05-15 18:12:08.347705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.055 [2024-05-15 18:12:08.351925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.056 [2024-05-15 18:12:08.351972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.056 [2024-05-15 18:12:08.351999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.179 ms 00:20:16.056 [2024-05-15 18:12:08.352015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.056 [2024-05-15 18:12:08.352161] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:16.056 [2024-05-15 18:12:08.353174] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:16.056 [2024-05-15 18:12:08.353229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.056 [2024-05-15 18:12:08.353247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.056 [2024-05-15 18:12:08.353268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:20:16.056 [2024-05-15 18:12:08.353287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.056 [2024-05-15 18:12:08.355354] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:16.056 [2024-05-15 18:12:08.372432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.056 [2024-05-15 18:12:08.372488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:16.056 [2024-05-15 18:12:08.372515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.088 ms 00:20:16.056 [2024-05-15 18:12:08.372536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.056 [2024-05-15 18:12:08.372664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.056 [2024-05-15 18:12:08.372694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:16.056 [2024-05-15 18:12:08.372711] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:16.056 [2024-05-15 18:12:08.372730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.056 [2024-05-15 18:12:08.381687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.056 [2024-05-15 18:12:08.381760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.056 [2024-05-15 18:12:08.381780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.879 ms 00:20:16.056 [2024-05-15 18:12:08.381806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.056 [2024-05-15 18:12:08.381996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.056 [2024-05-15 18:12:08.382027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.056 [2024-05-15 18:12:08.382057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:20:16.056 [2024-05-15 18:12:08.382076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.056 [2024-05-15 18:12:08.382119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.056 [2024-05-15 18:12:08.382145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:16.056 [2024-05-15 18:12:08.382169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:16.056 [2024-05-15 18:12:08.382195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.056 [2024-05-15 18:12:08.382234] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:16.056 [2024-05-15 18:12:08.387501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.056 [2024-05-15 18:12:08.387540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.056 [2024-05-15 18:12:08.387566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.273 ms 00:20:16.056 [2024-05-15 18:12:08.387582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.056 [2024-05-15 18:12:08.387680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.056 [2024-05-15 18:12:08.387700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:16.056 [2024-05-15 18:12:08.387726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:16.056 [2024-05-15 18:12:08.387741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.056 [2024-05-15 18:12:08.387782] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:16.056 [2024-05-15 18:12:08.387819] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:16.056 [2024-05-15 18:12:08.387906] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:16.056 [2024-05-15 18:12:08.387934] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:16.056 [2024-05-15 18:12:08.388024] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:16.056 [2024-05-15 18:12:08.388043] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:16.056 [2024-05-15 18:12:08.388066] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:16.056 [2024-05-15 18:12:08.388084] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:16.056 [2024-05-15 18:12:08.388105] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:16.056 [2024-05-15 18:12:08.388120] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:16.056 [2024-05-15 18:12:08.388147] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:16.056 [2024-05-15 18:12:08.388175] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:16.056 [2024-05-15 18:12:08.388193] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:16.056 [2024-05-15 18:12:08.388207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.056 [2024-05-15 18:12:08.388230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:16.056 [2024-05-15 18:12:08.388245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:20:16.056 [2024-05-15 18:12:08.388263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.056 [2024-05-15 18:12:08.388396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.056 [2024-05-15 18:12:08.388423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:16.056 [2024-05-15 18:12:08.388438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:16.056 [2024-05-15 18:12:08.388463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.056 [2024-05-15 18:12:08.388555] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:16.056 [2024-05-15 18:12:08.388580] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:16.056 [2024-05-15 18:12:08.388594] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.056 [2024-05-15 18:12:08.388615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.056 [2024-05-15 18:12:08.388629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:16.056 [2024-05-15 18:12:08.388646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:16.056 [2024-05-15 18:12:08.388660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:16.056 [2024-05-15 18:12:08.388709] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:16.056 [2024-05-15 18:12:08.388722] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:16.056 [2024-05-15 18:12:08.388744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.056 [2024-05-15 18:12:08.388757] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:16.056 [2024-05-15 18:12:08.388774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:16.056 [2024-05-15 18:12:08.388787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.056 [2024-05-15 18:12:08.388822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:16.056 [2024-05-15 18:12:08.388836] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:16.056 [2024-05-15 18:12:08.388854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.056 
[2024-05-15 18:12:08.388866] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:16.056 [2024-05-15 18:12:08.388884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:16.056 [2024-05-15 18:12:08.388897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.056 [2024-05-15 18:12:08.388916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:16.056 [2024-05-15 18:12:08.388929] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:16.056 [2024-05-15 18:12:08.388948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:16.056 [2024-05-15 18:12:08.388975] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:16.056 [2024-05-15 18:12:08.388993] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:16.056 [2024-05-15 18:12:08.389006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:16.056 [2024-05-15 18:12:08.389029] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:16.056 [2024-05-15 18:12:08.389057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:16.056 [2024-05-15 18:12:08.389075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:16.056 [2024-05-15 18:12:08.389088] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:16.056 [2024-05-15 18:12:08.389105] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:16.056 [2024-05-15 18:12:08.389118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:16.056 [2024-05-15 18:12:08.389135] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:16.056 [2024-05-15 18:12:08.389147] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:16.056 [2024-05-15 18:12:08.389163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:16.056 [2024-05-15 18:12:08.389176] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:16.056 [2024-05-15 18:12:08.389193] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:16.056 [2024-05-15 18:12:08.389205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.056 [2024-05-15 18:12:08.389222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:16.056 [2024-05-15 18:12:08.389235] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:16.056 [2024-05-15 18:12:08.389252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.056 [2024-05-15 18:12:08.389264] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:16.056 [2024-05-15 18:12:08.389612] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:16.056 [2024-05-15 18:12:08.389675] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.056 [2024-05-15 18:12:08.389810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.056 [2024-05-15 18:12:08.389867] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:16.056 [2024-05-15 18:12:08.389968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:16.056 [2024-05-15 18:12:08.390089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:16.056 [2024-05-15 18:12:08.390150] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region data_btm 00:20:16.056 [2024-05-15 18:12:08.390245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:16.057 [2024-05-15 18:12:08.390310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:16.057 [2024-05-15 18:12:08.390361] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:16.057 [2024-05-15 18:12:08.390453] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.057 [2024-05-15 18:12:08.390585] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:16.057 [2024-05-15 18:12:08.390653] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:16.057 [2024-05-15 18:12:08.390717] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:16.057 [2024-05-15 18:12:08.390908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:16.057 [2024-05-15 18:12:08.391068] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:16.057 [2024-05-15 18:12:08.391154] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:16.057 [2024-05-15 18:12:08.391257] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:16.057 [2024-05-15 18:12:08.391404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:16.057 [2024-05-15 18:12:08.391541] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:16.057 [2024-05-15 18:12:08.391667] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:16.057 [2024-05-15 18:12:08.391798] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:16.057 [2024-05-15 18:12:08.391936] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:16.057 [2024-05-15 18:12:08.391958] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:16.057 [2024-05-15 18:12:08.391978] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:16.057 [2024-05-15 18:12:08.391995] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.057 [2024-05-15 18:12:08.392015] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:16.057 [2024-05-15 18:12:08.392030] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:16.057 [2024-05-15 
18:12:08.392049] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:16.057 [2024-05-15 18:12:08.392063] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:16.057 [2024-05-15 18:12:08.392086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.057 [2024-05-15 18:12:08.392101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:16.057 [2024-05-15 18:12:08.392127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.566 ms 00:20:16.057 [2024-05-15 18:12:08.392141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.057 [2024-05-15 18:12:08.415704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.057 [2024-05-15 18:12:08.415898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.057 [2024-05-15 18:12:08.416055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.457 ms 00:20:16.057 [2024-05-15 18:12:08.416117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.057 [2024-05-15 18:12:08.416349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.057 [2024-05-15 18:12:08.416410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:16.057 [2024-05-15 18:12:08.416527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:16.057 [2024-05-15 18:12:08.416582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.057 [2024-05-15 18:12:08.462587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.057 [2024-05-15 18:12:08.462871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.057 [2024-05-15 18:12:08.463023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.934 ms 00:20:16.057 [2024-05-15 18:12:08.463148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.057 [2024-05-15 18:12:08.463334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.057 [2024-05-15 18:12:08.463408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.057 [2024-05-15 18:12:08.463532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:16.057 [2024-05-15 18:12:08.463587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.057 [2024-05-15 18:12:08.464235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.057 [2024-05-15 18:12:08.464387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.057 [2024-05-15 18:12:08.464514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:20:16.057 [2024-05-15 18:12:08.464621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.057 [2024-05-15 18:12:08.464856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.057 [2024-05-15 18:12:08.464921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.057 [2024-05-15 18:12:08.465038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:20:16.057 [2024-05-15 18:12:08.465093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.057 [2024-05-15 18:12:08.488058] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.057 [2024-05-15 18:12:08.488319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.057 [2024-05-15 18:12:08.488450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.896 ms 00:20:16.057 [2024-05-15 18:12:08.488587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.057 [2024-05-15 18:12:08.505522] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:16.057 [2024-05-15 18:12:08.505707] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:16.057 [2024-05-15 18:12:08.505869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.057 [2024-05-15 18:12:08.505922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:16.057 [2024-05-15 18:12:08.506057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.027 ms 00:20:16.057 [2024-05-15 18:12:08.506106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.057 [2024-05-15 18:12:08.535538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.057 [2024-05-15 18:12:08.535700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:16.057 [2024-05-15 18:12:08.535740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.297 ms 00:20:16.057 [2024-05-15 18:12:08.535763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.057 [2024-05-15 18:12:08.551563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.057 [2024-05-15 18:12:08.551610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:16.057 [2024-05-15 18:12:08.551636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.640 ms 00:20:16.057 [2024-05-15 18:12:08.551651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.316 [2024-05-15 18:12:08.566743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.316 [2024-05-15 18:12:08.566784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:16.316 [2024-05-15 18:12:08.566806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.985 ms 00:20:16.316 [2024-05-15 18:12:08.566820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.316 [2024-05-15 18:12:08.567414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.316 [2024-05-15 18:12:08.567451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:16.316 [2024-05-15 18:12:08.567472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:20:16.316 [2024-05-15 18:12:08.567486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.316 [2024-05-15 18:12:08.654261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.316 [2024-05-15 18:12:08.654344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:16.316 [2024-05-15 18:12:08.654391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.728 ms 00:20:16.316 [2024-05-15 18:12:08.654406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.316 [2024-05-15 18:12:08.667193] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p 
maximum resident size is: 59 (of 60) MiB 00:20:16.316 [2024-05-15 18:12:08.689133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.316 [2024-05-15 18:12:08.689241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:16.316 [2024-05-15 18:12:08.689266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.570 ms 00:20:16.316 [2024-05-15 18:12:08.689308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.316 [2024-05-15 18:12:08.689489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.316 [2024-05-15 18:12:08.689519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:16.316 [2024-05-15 18:12:08.689536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:16.316 [2024-05-15 18:12:08.689561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.316 [2024-05-15 18:12:08.689648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.316 [2024-05-15 18:12:08.689674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:16.316 [2024-05-15 18:12:08.689689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:16.316 [2024-05-15 18:12:08.689711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.316 [2024-05-15 18:12:08.693310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.316 [2024-05-15 18:12:08.693376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:16.316 [2024-05-15 18:12:08.693394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.565 ms 00:20:16.316 [2024-05-15 18:12:08.693413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.316 [2024-05-15 18:12:08.693479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.316 [2024-05-15 18:12:08.693505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:16.316 [2024-05-15 18:12:08.693526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:16.316 [2024-05-15 18:12:08.693547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.316 [2024-05-15 18:12:08.693622] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:16.316 [2024-05-15 18:12:08.693649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.316 [2024-05-15 18:12:08.693664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:16.316 [2024-05-15 18:12:08.693687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:16.316 [2024-05-15 18:12:08.693701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.316 [2024-05-15 18:12:08.724618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.316 [2024-05-15 18:12:08.724708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:16.316 [2024-05-15 18:12:08.724756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.872 ms 00:20:16.316 [2024-05-15 18:12:08.724771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.316 [2024-05-15 18:12:08.724916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.316 [2024-05-15 18:12:08.724937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
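
An aside on the superblock layout dumped above: the nvc region table is self-describing, in that each row's blk_offs equals the previous row's blk_offs plus its blk_sz, so the regions tile one contiguous block range with no gaps. A quick shell check makes this concrete (the offsets and sizes are copied verbatim from the dump; the loop itself is illustrative and not part of the SPDK test suite):

regions="
0x0      0x20
0x20     0x5a00
0x5a20   0x80
0x5aa0   0x80
0x5b20   0x400
0x5f20   0x400
0x6320   0x400
0x6720   0x400
0x6b20   0x40
0x6b60   0x40
0x6ba0   0x20
0x6bc0   0x20
0x6be0   0x100000
0x106be0 0x3c720
"
next=0
while read -r offs sz; do
  [[ -z "$offs" ]] && continue                   # skip blank lines
  (( offs == next )) || echo "gap before $offs"  # each row should start where the last one ended
  next=$(( offs + sz ))
done <<< "$regions"
printf 'layout ends at 0x%x blocks\n' "$next"    # prints 0x143300

The end offset, 0x143300 = 1,323,776 blocks, works out to 5,171 MiB at the 4 KiB block size implied by the dump's own MiB figures (0x20 blocks is reported as 0.12 MiB), matching the "NV cache device capacity: 5171.00 MiB" line printed during layout setup.
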
00:20:16.316 [2024-05-15 18:12:08.724960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:16.316 [2024-05-15 18:12:08.724975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.317 [2024-05-15 18:12:08.726243] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:16.317 [2024-05-15 18:12:08.730113] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 378.263 ms, result 0 00:20:16.317 [2024-05-15 18:12:08.731331] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:16.317 Some configs were skipped because the RPC state that can call them passed over. 00:20:16.317 18:12:08 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:16.603 [2024-05-15 18:12:09.083407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.603 [2024-05-15 18:12:09.083638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:20:16.603 [2024-05-15 18:12:09.083777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.573 ms 00:20:16.603 [2024-05-15 18:12:09.083939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.603 [2024-05-15 18:12:09.084055] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 33.214 ms, result 0 00:20:16.603 true 00:20:16.863 18:12:09 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:17.123 [2024-05-15 18:12:09.416236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.123 [2024-05-15 18:12:09.416521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:20:17.123 [2024-05-15 18:12:09.416663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.215 ms 00:20:17.123 [2024-05-15 18:12:09.416720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.123 [2024-05-15 18:12:09.416902] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 32.874 ms, result 0 00:20:17.123 true 00:20:17.123 18:12:09 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78679 00:20:17.123 18:12:09 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 78679 ']' 00:20:17.123 18:12:09 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 78679 00:20:17.123 18:12:09 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:20:17.123 18:12:09 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:17.123 18:12:09 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78679 00:20:17.123 killing process with pid 78679 00:20:17.123 18:12:09 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:17.123 18:12:09 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:17.123 18:12:09 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78679' 00:20:17.123 18:12:09 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 78679 00:20:17.123 18:12:09 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 78679 00:20:18.062 [2024-05-15 18:12:10.471766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.062 [2024-05-15 
18:12:10.471872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:18.062 [2024-05-15 18:12:10.471898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:18.062 [2024-05-15 18:12:10.471915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.062 [2024-05-15 18:12:10.471950] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:18.062 [2024-05-15 18:12:10.475515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.062 [2024-05-15 18:12:10.475551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:18.062 [2024-05-15 18:12:10.475571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.537 ms 00:20:18.062 [2024-05-15 18:12:10.475584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.062 [2024-05-15 18:12:10.475916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.062 [2024-05-15 18:12:10.475937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:18.062 [2024-05-15 18:12:10.475954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:20:18.062 [2024-05-15 18:12:10.475968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.062 [2024-05-15 18:12:10.480198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.062 [2024-05-15 18:12:10.480242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:18.062 [2024-05-15 18:12:10.480264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.198 ms 00:20:18.062 [2024-05-15 18:12:10.480310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.062 [2024-05-15 18:12:10.487466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.062 [2024-05-15 18:12:10.487503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:18.062 [2024-05-15 18:12:10.487540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.094 ms 00:20:18.062 [2024-05-15 18:12:10.487554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.062 [2024-05-15 18:12:10.500327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.062 [2024-05-15 18:12:10.500375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:18.062 [2024-05-15 18:12:10.500398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.705 ms 00:20:18.062 [2024-05-15 18:12:10.500412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.062 [2024-05-15 18:12:10.509246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.062 [2024-05-15 18:12:10.509313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:18.062 [2024-05-15 18:12:10.509351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.777 ms 00:20:18.062 [2024-05-15 18:12:10.509374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.062 [2024-05-15 18:12:10.509544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.062 [2024-05-15 18:12:10.509564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:18.062 [2024-05-15 18:12:10.509583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:18.062 [2024-05-15 
18:12:10.509595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.062 [2024-05-15 18:12:10.522575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.062 [2024-05-15 18:12:10.522639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:18.062 [2024-05-15 18:12:10.522682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.922 ms 00:20:18.062 [2024-05-15 18:12:10.522697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.062 [2024-05-15 18:12:10.535197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.062 [2024-05-15 18:12:10.535236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:18.062 [2024-05-15 18:12:10.535277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.441 ms 00:20:18.062 [2024-05-15 18:12:10.535306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.062 [2024-05-15 18:12:10.547471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.062 [2024-05-15 18:12:10.547508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:18.062 [2024-05-15 18:12:10.547548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.078 ms 00:20:18.062 [2024-05-15 18:12:10.547562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.062 [2024-05-15 18:12:10.559907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.062 [2024-05-15 18:12:10.559949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:18.062 [2024-05-15 18:12:10.559975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.254 ms 00:20:18.062 [2024-05-15 18:12:10.559989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.062 [2024-05-15 18:12:10.560043] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:18.062 [2024-05-15 18:12:10.560070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 
wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:18.062 [2024-05-15 18:12:10.560482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.560989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561163] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561624] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:18.063 [2024-05-15 18:12:10.561744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:18.322 [2024-05-15 18:12:10.561764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:18.322 [2024-05-15 18:12:10.561798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:18.322 [2024-05-15 18:12:10.561830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:18.323 [2024-05-15 18:12:10.561855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:18.323 [2024-05-15 18:12:10.561875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:18.323 [2024-05-15 18:12:10.561889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:18.323 [2024-05-15 18:12:10.561909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:18.323 [2024-05-15 18:12:10.561944] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:18.323 [2024-05-15 18:12:10.561980] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 08a666aa-140a-4706-af52-1f2e14a3178c 00:20:18.323 [2024-05-15 18:12:10.562001] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:18.323 [2024-05-15 18:12:10.562025] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:18.323 [2024-05-15 18:12:10.562038] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:18.323 [2024-05-15 18:12:10.562058] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:18.323 [2024-05-15 18:12:10.562072] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:18.323 [2024-05-15 18:12:10.562091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:18.323 [2024-05-15 18:12:10.562105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:18.323 [2024-05-15 18:12:10.562123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:18.323 [2024-05-15 18:12:10.562136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:18.323 [2024-05-15 18:12:10.562156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.323 [2024-05-15 18:12:10.562171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Dump statistics 00:20:18.323 [2024-05-15 18:12:10.562191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:20:18.323 [2024-05-15 18:12:10.562211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.580286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.323 [2024-05-15 18:12:10.580384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:18.323 [2024-05-15 18:12:10.580422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.995 ms 00:20:18.323 [2024-05-15 18:12:10.580438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.580788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.323 [2024-05-15 18:12:10.580814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:18.323 [2024-05-15 18:12:10.580838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:20:18.323 [2024-05-15 18:12:10.580860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.639594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.323 [2024-05-15 18:12:10.639664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.323 [2024-05-15 18:12:10.639705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.323 [2024-05-15 18:12:10.639720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.639897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.323 [2024-05-15 18:12:10.639921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.323 [2024-05-15 18:12:10.639938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.323 [2024-05-15 18:12:10.639956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.640030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.323 [2024-05-15 18:12:10.640050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.323 [2024-05-15 18:12:10.640072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.323 [2024-05-15 18:12:10.640085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.640119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.323 [2024-05-15 18:12:10.640136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.323 [2024-05-15 18:12:10.640153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.323 [2024-05-15 18:12:10.640166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.748375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.323 [2024-05-15 18:12:10.748445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.323 [2024-05-15 18:12:10.748469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.323 [2024-05-15 18:12:10.748482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.788006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.323 [2024-05-15 
18:12:10.788091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.323 [2024-05-15 18:12:10.788118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.323 [2024-05-15 18:12:10.788133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.788265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.323 [2024-05-15 18:12:10.788298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.323 [2024-05-15 18:12:10.788315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.323 [2024-05-15 18:12:10.788360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.788409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.323 [2024-05-15 18:12:10.788425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.323 [2024-05-15 18:12:10.788442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.323 [2024-05-15 18:12:10.788454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.788600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.323 [2024-05-15 18:12:10.788624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:18.323 [2024-05-15 18:12:10.788641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.323 [2024-05-15 18:12:10.788655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.788731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.323 [2024-05-15 18:12:10.788750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:18.323 [2024-05-15 18:12:10.788766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.323 [2024-05-15 18:12:10.788779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.788830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.323 [2024-05-15 18:12:10.788848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:18.323 [2024-05-15 18:12:10.788864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.323 [2024-05-15 18:12:10.788876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.788938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.323 [2024-05-15 18:12:10.788955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:18.323 [2024-05-15 18:12:10.788970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.323 [2024-05-15 18:12:10.788982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.323 [2024-05-15 18:12:10.789153] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 317.356 ms, result 0 00:20:19.699 18:12:11 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:19.699 18:12:11 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:19.699 [2024-05-15 18:12:12.050476] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:20:19.699 [2024-05-15 18:12:12.050647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78748 ] 00:20:19.958 [2024-05-15 18:12:12.216656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.958 [2024-05-15 18:12:12.449162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.528 [2024-05-15 18:12:12.797803] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:20.528 [2024-05-15 18:12:12.797897] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:20.528 [2024-05-15 18:12:12.960411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.528 [2024-05-15 18:12:12.960487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:20.528 [2024-05-15 18:12:12.960526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:20.528 [2024-05-15 18:12:12.960539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.528 [2024-05-15 18:12:12.964252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.528 [2024-05-15 18:12:12.964343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.528 [2024-05-15 18:12:12.964391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.683 ms 00:20:20.528 [2024-05-15 18:12:12.964409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.528 [2024-05-15 18:12:12.964603] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:20.528 [2024-05-15 18:12:12.965695] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:20.528 [2024-05-15 18:12:12.965741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.528 [2024-05-15 18:12:12.965757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.528 [2024-05-15 18:12:12.965782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms 00:20:20.528 [2024-05-15 18:12:12.965793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.528 [2024-05-15 18:12:12.968002] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:20.528 [2024-05-15 18:12:12.984473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.528 [2024-05-15 18:12:12.984514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:20.528 [2024-05-15 18:12:12.984548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.473 ms 00:20:20.528 [2024-05-15 18:12:12.984560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.528 [2024-05-15 18:12:12.984673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.528 [2024-05-15 18:12:12.984713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:20.528 [2024-05-15 18:12:12.984726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:20.528 [2024-05-15 18:12:12.984738] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.528 [2024-05-15 18:12:12.993869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.528 [2024-05-15 18:12:12.993916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.528 [2024-05-15 18:12:12.993948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.071 ms 00:20:20.528 [2024-05-15 18:12:12.993961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.528 [2024-05-15 18:12:12.994119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.528 [2024-05-15 18:12:12.994141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.528 [2024-05-15 18:12:12.994155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:20.528 [2024-05-15 18:12:12.994167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.528 [2024-05-15 18:12:12.994209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.528 [2024-05-15 18:12:12.994226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:20.528 [2024-05-15 18:12:12.994239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:20.528 [2024-05-15 18:12:12.994251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.528 [2024-05-15 18:12:12.994284] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:20.528 [2024-05-15 18:12:12.999501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.528 [2024-05-15 18:12:12.999539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.528 [2024-05-15 18:12:12.999555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.226 ms 00:20:20.528 [2024-05-15 18:12:12.999567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.528 [2024-05-15 18:12:12.999666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.528 [2024-05-15 18:12:12.999685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:20.528 [2024-05-15 18:12:12.999698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:20.528 [2024-05-15 18:12:12.999710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.528 [2024-05-15 18:12:12.999741] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:20.528 [2024-05-15 18:12:12.999772] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:20.528 [2024-05-15 18:12:12.999814] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:20.528 [2024-05-15 18:12:12.999868] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:20.528 [2024-05-15 18:12:12.999953] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:20.528 [2024-05-15 18:12:12.999969] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:20.528 [2024-05-15 18:12:12.999984] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:20.528 [2024-05-15 18:12:12.999999] 
ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:20.528 [2024-05-15 18:12:13.000014] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:20.528 [2024-05-15 18:12:13.000027] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:20.528 [2024-05-15 18:12:13.000038] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:20.528 [2024-05-15 18:12:13.000049] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:20.528 [2024-05-15 18:12:13.000066] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:20.528 [2024-05-15 18:12:13.000078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.528 [2024-05-15 18:12:13.000094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:20.528 [2024-05-15 18:12:13.000106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:20:20.528 [2024-05-15 18:12:13.000118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.529 [2024-05-15 18:12:13.000212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.529 [2024-05-15 18:12:13.000229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:20.529 [2024-05-15 18:12:13.000241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:20.529 [2024-05-15 18:12:13.000258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.529 [2024-05-15 18:12:13.000399] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:20.529 [2024-05-15 18:12:13.000419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:20.529 [2024-05-15 18:12:13.000452] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.529 [2024-05-15 18:12:13.000466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.529 [2024-05-15 18:12:13.000489] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:20.529 [2024-05-15 18:12:13.000500] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:20.529 [2024-05-15 18:12:13.000511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:20.529 [2024-05-15 18:12:13.000523] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:20.529 [2024-05-15 18:12:13.000535] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:20.529 [2024-05-15 18:12:13.000546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.529 [2024-05-15 18:12:13.000557] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:20.529 [2024-05-15 18:12:13.000567] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:20.529 [2024-05-15 18:12:13.000577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.529 [2024-05-15 18:12:13.000602] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:20.529 [2024-05-15 18:12:13.000613] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:20.529 [2024-05-15 18:12:13.000624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.529 [2024-05-15 18:12:13.000635] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:20.529 [2024-05-15 
18:12:13.000646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:20.529 [2024-05-15 18:12:13.000658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.529 [2024-05-15 18:12:13.000669] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:20.529 [2024-05-15 18:12:13.000680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:20.529 [2024-05-15 18:12:13.000691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:20.529 [2024-05-15 18:12:13.000702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:20.529 [2024-05-15 18:12:13.000712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:20.529 [2024-05-15 18:12:13.000723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:20.529 [2024-05-15 18:12:13.000734] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:20.529 [2024-05-15 18:12:13.000745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:20.529 [2024-05-15 18:12:13.000755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:20.529 [2024-05-15 18:12:13.000765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:20.529 [2024-05-15 18:12:13.000776] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:20.529 [2024-05-15 18:12:13.000786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:20.529 [2024-05-15 18:12:13.000796] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:20.529 [2024-05-15 18:12:13.000807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:20.529 [2024-05-15 18:12:13.000817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:20.529 [2024-05-15 18:12:13.000828] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:20.529 [2024-05-15 18:12:13.000838] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:20.529 [2024-05-15 18:12:13.000849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.529 [2024-05-15 18:12:13.000859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:20.529 [2024-05-15 18:12:13.000870] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:20.529 [2024-05-15 18:12:13.000881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.529 [2024-05-15 18:12:13.000891] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:20.529 [2024-05-15 18:12:13.000903] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:20.529 [2024-05-15 18:12:13.000914] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.529 [2024-05-15 18:12:13.000926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.529 [2024-05-15 18:12:13.000938] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:20.529 [2024-05-15 18:12:13.000950] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:20.529 [2024-05-15 18:12:13.000961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:20.529 [2024-05-15 18:12:13.000972] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:20.529 [2024-05-15 18:12:13.000982] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 
0.25 MiB 00:20:20.529 [2024-05-15 18:12:13.001006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:20.529 [2024-05-15 18:12:13.001021] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:20.529 [2024-05-15 18:12:13.001036] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.529 [2024-05-15 18:12:13.001049] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:20.529 [2024-05-15 18:12:13.001061] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:20.529 [2024-05-15 18:12:13.001073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:20.529 [2024-05-15 18:12:13.001085] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:20.529 [2024-05-15 18:12:13.001096] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:20.529 [2024-05-15 18:12:13.001107] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:20.529 [2024-05-15 18:12:13.001118] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:20.529 [2024-05-15 18:12:13.001129] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:20.529 [2024-05-15 18:12:13.001141] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:20.529 [2024-05-15 18:12:13.001152] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:20.529 [2024-05-15 18:12:13.001163] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:20.529 [2024-05-15 18:12:13.001175] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:20.529 [2024-05-15 18:12:13.001186] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:20.529 [2024-05-15 18:12:13.001197] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:20.529 [2024-05-15 18:12:13.001210] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.529 [2024-05-15 18:12:13.001228] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:20.529 [2024-05-15 18:12:13.001240] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:20.529 [2024-05-15 18:12:13.001252] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 
blk_offs:0x1900040 blk_sz:0x360 00:20:20.529 [2024-05-15 18:12:13.001263] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:20.529 [2024-05-15 18:12:13.001276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.529 [2024-05-15 18:12:13.001288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:20.529 [2024-05-15 18:12:13.001300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:20:20.529 [2024-05-15 18:12:13.001326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.529 [2024-05-15 18:12:13.022789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.529 [2024-05-15 18:12:13.022846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.529 [2024-05-15 18:12:13.022880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.386 ms 00:20:20.529 [2024-05-15 18:12:13.022892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.529 [2024-05-15 18:12:13.023065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.529 [2024-05-15 18:12:13.023085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:20.529 [2024-05-15 18:12:13.023098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:20.529 [2024-05-15 18:12:13.023109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.787 [2024-05-15 18:12:13.073803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.787 [2024-05-15 18:12:13.073866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.787 [2024-05-15 18:12:13.073902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.661 ms 00:20:20.787 [2024-05-15 18:12:13.073915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.787 [2024-05-15 18:12:13.074060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.787 [2024-05-15 18:12:13.074080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.787 [2024-05-15 18:12:13.074093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:20.787 [2024-05-15 18:12:13.074105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.787 [2024-05-15 18:12:13.074759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.787 [2024-05-15 18:12:13.074779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.787 [2024-05-15 18:12:13.074793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:20:20.787 [2024-05-15 18:12:13.074804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.787 [2024-05-15 18:12:13.074970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.787 [2024-05-15 18:12:13.074997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.787 [2024-05-15 18:12:13.075011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:20:20.787 [2024-05-15 18:12:13.075032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.787 [2024-05-15 18:12:13.096567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.787 [2024-05-15 18:12:13.096621] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.787 [2024-05-15 18:12:13.096642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.501 ms 00:20:20.787 [2024-05-15 18:12:13.096655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.787 [2024-05-15 18:12:13.113746] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:20.787 [2024-05-15 18:12:13.113799] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:20.787 [2024-05-15 18:12:13.113819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.787 [2024-05-15 18:12:13.113832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:20.787 [2024-05-15 18:12:13.113848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.961 ms 00:20:20.787 [2024-05-15 18:12:13.113860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.787 [2024-05-15 18:12:13.143538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.787 [2024-05-15 18:12:13.143607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:20.787 [2024-05-15 18:12:13.143638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.557 ms 00:20:20.787 [2024-05-15 18:12:13.143652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.787 [2024-05-15 18:12:13.159947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.787 [2024-05-15 18:12:13.160010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:20.787 [2024-05-15 18:12:13.160030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.157 ms 00:20:20.787 [2024-05-15 18:12:13.160042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.787 [2024-05-15 18:12:13.175187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.787 [2024-05-15 18:12:13.175245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:20.787 [2024-05-15 18:12:13.175264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.040 ms 00:20:20.787 [2024-05-15 18:12:13.175276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.787 [2024-05-15 18:12:13.175866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.787 [2024-05-15 18:12:13.175904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:20.787 [2024-05-15 18:12:13.175921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:20:20.787 [2024-05-15 18:12:13.175933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.787 [2024-05-15 18:12:13.255508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.787 [2024-05-15 18:12:13.255584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:20.787 [2024-05-15 18:12:13.255621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.531 ms 00:20:20.787 [2024-05-15 18:12:13.255634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.787 [2024-05-15 18:12:13.268092] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:21.045 [2024-05-15 18:12:13.289711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:21.045 [2024-05-15 18:12:13.289784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:21.045 [2024-05-15 18:12:13.289820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.896 ms 00:20:21.045 [2024-05-15 18:12:13.289832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-05-15 18:12:13.289990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.045 [2024-05-15 18:12:13.290012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:21.045 [2024-05-15 18:12:13.290027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:21.045 [2024-05-15 18:12:13.290039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-05-15 18:12:13.290117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.045 [2024-05-15 18:12:13.290136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:21.046 [2024-05-15 18:12:13.290150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:21.046 [2024-05-15 18:12:13.290162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.046 [2024-05-15 18:12:13.292367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.046 [2024-05-15 18:12:13.292418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:21.046 [2024-05-15 18:12:13.292450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.173 ms 00:20:21.046 [2024-05-15 18:12:13.292461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.046 [2024-05-15 18:12:13.292517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.046 [2024-05-15 18:12:13.292534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:21.046 [2024-05-15 18:12:13.292553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:21.046 [2024-05-15 18:12:13.292565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.046 [2024-05-15 18:12:13.292614] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:21.046 [2024-05-15 18:12:13.292632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.046 [2024-05-15 18:12:13.292644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:21.046 [2024-05-15 18:12:13.292656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:21.046 [2024-05-15 18:12:13.292668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.046 [2024-05-15 18:12:13.323527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.046 [2024-05-15 18:12:13.323703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:21.046 [2024-05-15 18:12:13.323852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.830 ms 00:20:21.046 [2024-05-15 18:12:13.323906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.046 [2024-05-15 18:12:13.324070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.046 [2024-05-15 18:12:13.324132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:21.046 [2024-05-15 18:12:13.324234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 
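Each FTL management step above is traced by mngt/ftl_mngt.c as a fixed four-entry group (Action, name, duration, status), so per-step timings can be pulled out of a saved console log with standard text tools. A minimal sketch, assuming the output was captured to a file named ftl.log with one *NOTICE* entry per line as originally emitted; the file name is illustrative and not part of this run:

    # Pair each step name with its duration, then list the ten slowest steps.
    awk '/trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name:/ {
             sub(/.*name: /, ""); step = $0
         }
         /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration:/ {
             sub(/.*duration: /, ""); sub(/ ms.*/, ""); print $0, "ms -", step
         }' ftl.log | sort -rn | head

Against the first startup above, this would surface "Restore P2L checkpoints" (79.531 ms) and "Initialize NV cache" (50.661 ms) among the slowest steps.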
00:20:21.046 [2024-05-15 18:12:13.324282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.046 [2024-05-15 18:12:13.325629] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:21.046 [2024-05-15 18:12:13.329777] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 364.893 ms, result 0 00:20:21.046 [2024-05-15 18:12:13.330721] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:21.046 [2024-05-15 18:12:13.346750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:31.540  Copying: 26/256 [MB] (26 MBps) Copying: 50/256 [MB] (23 MBps) Copying: 74/256 [MB] (23 MBps) Copying: 98/256 [MB] (24 MBps) Copying: 123/256 [MB] (24 MBps) Copying: 147/256 [MB] (24 MBps) Copying: 172/256 [MB] (24 MBps) Copying: 196/256 [MB] (24 MBps) Copying: 221/256 [MB] (24 MBps) Copying: 245/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 24 MBps)[2024-05-15 18:12:23.742357] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:31.540 [2024-05-15 18:12:23.754928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.540 [2024-05-15 18:12:23.754989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:31.540 [2024-05-15 18:12:23.755010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:31.540 [2024-05-15 18:12:23.755023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.540 [2024-05-15 18:12:23.755069] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:31.540 [2024-05-15 18:12:23.758676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.540 [2024-05-15 18:12:23.758713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:31.540 [2024-05-15 18:12:23.758730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.583 ms 00:20:31.540 [2024-05-15 18:12:23.758742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.540 [2024-05-15 18:12:23.759060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-05-15 18:12:23.759100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:31.541 [2024-05-15 18:12:23.759116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:31.541 [2024-05-15 18:12:23.759129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-05-15 18:12:23.762773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-05-15 18:12:23.762807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:31.541 [2024-05-15 18:12:23.762829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.620 ms 00:20:31.541 [2024-05-15 18:12:23.762841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-05-15 18:12:23.770177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-05-15 18:12:23.770215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:31.541 [2024-05-15 18:12:23.770230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.308 ms 00:20:31.541 [2024-05-15 
18:12:23.770242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-05-15 18:12:23.801412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-05-15 18:12:23.801481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:31.541 [2024-05-15 18:12:23.801502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.076 ms 00:20:31.541 [2024-05-15 18:12:23.801514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-05-15 18:12:23.819821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-05-15 18:12:23.819893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:31.541 [2024-05-15 18:12:23.819915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.214 ms 00:20:31.541 [2024-05-15 18:12:23.819940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-05-15 18:12:23.820228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-05-15 18:12:23.820255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:31.541 [2024-05-15 18:12:23.820276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:31.541 [2024-05-15 18:12:23.820315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-05-15 18:12:23.851326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-05-15 18:12:23.851383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:31.541 [2024-05-15 18:12:23.851403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.970 ms 00:20:31.541 [2024-05-15 18:12:23.851415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-05-15 18:12:23.881603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-05-15 18:12:23.881666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:31.541 [2024-05-15 18:12:23.881685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.108 ms 00:20:31.541 [2024-05-15 18:12:23.881698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-05-15 18:12:23.911737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-05-15 18:12:23.911809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:31.541 [2024-05-15 18:12:23.911838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.957 ms 00:20:31.541 [2024-05-15 18:12:23.911852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-05-15 18:12:23.941570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.541 [2024-05-15 18:12:23.941623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:31.541 [2024-05-15 18:12:23.941643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.604 ms 00:20:31.541 [2024-05-15 18:12:23.941655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.541 [2024-05-15 18:12:23.941729] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:31.541 [2024-05-15 18:12:23.941764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 
18:12:23.941779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.941989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 
[2024-05-15 18:12:23.942089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 
state: free 00:20:31.541 [2024-05-15 18:12:23.942426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:31.541 [2024-05-15 18:12:23.942541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 
0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.942996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.943017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.943041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.943063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.943081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.943094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.943122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.943135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.943147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.943159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:31.542 [2024-05-15 18:12:23.943182] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:20:31.542 [2024-05-15 18:12:23.943194] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 08a666aa-140a-4706-af52-1f2e14a3178c 00:20:31.542 [2024-05-15 18:12:23.943206] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:31.542 [2024-05-15 18:12:23.943225] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:31.542 [2024-05-15 18:12:23.943245] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:31.542 [2024-05-15 18:12:23.943267] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:31.542 [2024-05-15 18:12:23.943286] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:31.542 [2024-05-15 18:12:23.943315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:31.542 [2024-05-15 18:12:23.943328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:31.542 [2024-05-15 18:12:23.943338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:31.542 [2024-05-15 18:12:23.943349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:31.542 [2024-05-15 18:12:23.943361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.542 [2024-05-15 18:12:23.943373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:31.542 [2024-05-15 18:12:23.943393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:20:31.542 [2024-05-15 18:12:23.943405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.542 [2024-05-15 18:12:23.960346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.542 [2024-05-15 18:12:23.960417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:31.542 [2024-05-15 18:12:23.960436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.908 ms 00:20:31.542 [2024-05-15 18:12:23.960448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.542 [2024-05-15 18:12:23.960746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.542 [2024-05-15 18:12:23.960777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:31.542 [2024-05-15 18:12:23.960793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:20:31.542 [2024-05-15 18:12:23.960804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.542 [2024-05-15 18:12:24.010726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.542 [2024-05-15 18:12:24.010799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.542 [2024-05-15 18:12:24.010820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.542 [2024-05-15 18:12:24.010832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.542 [2024-05-15 18:12:24.010951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.542 [2024-05-15 18:12:24.010976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.542 [2024-05-15 18:12:24.010989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.542 [2024-05-15 18:12:24.011001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.542 [2024-05-15 18:12:24.011067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.542 
[2024-05-15 18:12:24.011087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.542 [2024-05-15 18:12:24.011100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.542 [2024-05-15 18:12:24.011112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.542 [2024-05-15 18:12:24.011138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.542 [2024-05-15 18:12:24.011152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.542 [2024-05-15 18:12:24.011171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.542 [2024-05-15 18:12:24.011182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.802 [2024-05-15 18:12:24.117068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.802 [2024-05-15 18:12:24.117161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.802 [2024-05-15 18:12:24.117183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.802 [2024-05-15 18:12:24.117196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.802 [2024-05-15 18:12:24.158398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.802 [2024-05-15 18:12:24.158481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.802 [2024-05-15 18:12:24.158501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.802 [2024-05-15 18:12:24.158514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.802 [2024-05-15 18:12:24.158598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.802 [2024-05-15 18:12:24.158616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.802 [2024-05-15 18:12:24.158629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.802 [2024-05-15 18:12:24.158641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.802 [2024-05-15 18:12:24.158679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.802 [2024-05-15 18:12:24.158694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.802 [2024-05-15 18:12:24.158706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.802 [2024-05-15 18:12:24.158723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.802 [2024-05-15 18:12:24.158850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.802 [2024-05-15 18:12:24.158877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.802 [2024-05-15 18:12:24.158891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.802 [2024-05-15 18:12:24.158902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.802 [2024-05-15 18:12:24.158961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.802 [2024-05-15 18:12:24.158980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:31.802 [2024-05-15 18:12:24.158993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.802 [2024-05-15 18:12:24.159005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.802 [2024-05-15 18:12:24.159059] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.802 [2024-05-15 18:12:24.159075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.802 [2024-05-15 18:12:24.159087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.802 [2024-05-15 18:12:24.159099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.802 [2024-05-15 18:12:24.159154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.802 [2024-05-15 18:12:24.159171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.802 [2024-05-15 18:12:24.159183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.802 [2024-05-15 18:12:24.159200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.802 [2024-05-15 18:12:24.159407] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 404.478 ms, result 0 00:20:33.187 00:20:33.187 00:20:33.187 18:12:25 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:33.187 18:12:25 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:33.447 18:12:25 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:33.706 [2024-05-15 18:12:26.005689] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:20:33.706 [2024-05-15 18:12:26.005878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78892 ] 00:20:33.706 [2024-05-15 18:12:26.182199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.273 [2024-05-15 18:12:26.498930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.532 [2024-05-15 18:12:26.850653] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:34.532 [2024-05-15 18:12:26.850751] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:34.532 [2024-05-15 18:12:27.008743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.532 [2024-05-15 18:12:27.008829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:34.532 [2024-05-15 18:12:27.008869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:34.532 [2024-05-15 18:12:27.008882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.532 [2024-05-15 18:12:27.012437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.532 [2024-05-15 18:12:27.012490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:34.532 [2024-05-15 18:12:27.012517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.521 ms 00:20:34.532 [2024-05-15 18:12:27.012536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.532 [2024-05-15 18:12:27.012685] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:34.532 [2024-05-15 18:12:27.013688] mngt/ftl_mngt_bdev.c: 
235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:34.532 [2024-05-15 18:12:27.013738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.532 [2024-05-15 18:12:27.013755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:34.532 [2024-05-15 18:12:27.013769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:20:34.532 [2024-05-15 18:12:27.013780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.532 [2024-05-15 18:12:27.015889] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:34.792 [2024-05-15 18:12:27.033621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.792 [2024-05-15 18:12:27.033722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:34.792 [2024-05-15 18:12:27.033746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.730 ms 00:20:34.792 [2024-05-15 18:12:27.033760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.792 [2024-05-15 18:12:27.033949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.792 [2024-05-15 18:12:27.033979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:34.792 [2024-05-15 18:12:27.033994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:34.792 [2024-05-15 18:12:27.034006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.792 [2024-05-15 18:12:27.042964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.792 [2024-05-15 18:12:27.043020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:34.792 [2024-05-15 18:12:27.043055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.891 ms 00:20:34.792 [2024-05-15 18:12:27.043067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.793 [2024-05-15 18:12:27.043231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.793 [2024-05-15 18:12:27.043254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:34.793 [2024-05-15 18:12:27.043269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:34.793 [2024-05-15 18:12:27.043281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.793 [2024-05-15 18:12:27.043618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.793 [2024-05-15 18:12:27.043681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:34.793 [2024-05-15 18:12:27.043725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:34.793 [2024-05-15 18:12:27.043764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.793 [2024-05-15 18:12:27.043849] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:34.793 [2024-05-15 18:12:27.049091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.793 [2024-05-15 18:12:27.049131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:34.793 [2024-05-15 18:12:27.049165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.267 ms 00:20:34.793 [2024-05-15 18:12:27.049176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.793 [2024-05-15 
18:12:27.049278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.793 [2024-05-15 18:12:27.049299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:34.793 [2024-05-15 18:12:27.049335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:34.793 [2024-05-15 18:12:27.049350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.793 [2024-05-15 18:12:27.049385] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:34.793 [2024-05-15 18:12:27.049418] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:34.793 [2024-05-15 18:12:27.049461] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:34.793 [2024-05-15 18:12:27.049487] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:34.793 [2024-05-15 18:12:27.049570] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:34.793 [2024-05-15 18:12:27.049586] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:34.793 [2024-05-15 18:12:27.049602] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:34.793 [2024-05-15 18:12:27.049617] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:34.793 [2024-05-15 18:12:27.049630] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:34.793 [2024-05-15 18:12:27.049650] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:34.793 [2024-05-15 18:12:27.049662] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:34.793 [2024-05-15 18:12:27.049673] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:34.793 [2024-05-15 18:12:27.049690] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:34.793 [2024-05-15 18:12:27.049702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.793 [2024-05-15 18:12:27.049718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:34.793 [2024-05-15 18:12:27.049731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:20:34.793 [2024-05-15 18:12:27.049743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.793 [2024-05-15 18:12:27.049823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.793 [2024-05-15 18:12:27.049841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:34.793 [2024-05-15 18:12:27.049854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:34.793 [2024-05-15 18:12:27.049865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.793 [2024-05-15 18:12:27.049953] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:34.793 [2024-05-15 18:12:27.049977] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:34.793 [2024-05-15 18:12:27.049997] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:34.793 [2024-05-15 18:12:27.050009] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:20:34.793 [2024-05-15 18:12:27.050020] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:34.793 [2024-05-15 18:12:27.050031] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:34.793 [2024-05-15 18:12:27.050042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:34.793 [2024-05-15 18:12:27.050052] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:34.793 [2024-05-15 18:12:27.050063] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:34.793 [2024-05-15 18:12:27.050072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:34.793 [2024-05-15 18:12:27.050083] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:34.793 [2024-05-15 18:12:27.050093] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:34.793 [2024-05-15 18:12:27.050103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:34.793 [2024-05-15 18:12:27.050129] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:34.793 [2024-05-15 18:12:27.050141] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:34.793 [2024-05-15 18:12:27.050151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.793 [2024-05-15 18:12:27.050162] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:34.793 [2024-05-15 18:12:27.050175] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:34.793 [2024-05-15 18:12:27.050193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.793 [2024-05-15 18:12:27.050203] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:34.793 [2024-05-15 18:12:27.050214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:34.793 [2024-05-15 18:12:27.050225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:34.793 [2024-05-15 18:12:27.050236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:34.793 [2024-05-15 18:12:27.050246] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:34.793 [2024-05-15 18:12:27.050256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:34.793 [2024-05-15 18:12:27.050266] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:34.793 [2024-05-15 18:12:27.050276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:34.793 [2024-05-15 18:12:27.050287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:34.793 [2024-05-15 18:12:27.050315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:34.793 [2024-05-15 18:12:27.050328] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:34.793 [2024-05-15 18:12:27.050339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:34.793 [2024-05-15 18:12:27.050349] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:34.793 [2024-05-15 18:12:27.050360] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:34.793 [2024-05-15 18:12:27.050370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:34.793 [2024-05-15 18:12:27.050381] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:34.793 [2024-05-15 18:12:27.050391] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:34.793 [2024-05-15 18:12:27.050402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:34.793 [2024-05-15 18:12:27.050414] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:34.793 [2024-05-15 18:12:27.050425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:34.793 [2024-05-15 18:12:27.050436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:34.793 [2024-05-15 18:12:27.050447] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:34.793 [2024-05-15 18:12:27.050459] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:34.793 [2024-05-15 18:12:27.050471] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:34.793 [2024-05-15 18:12:27.050482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.793 [2024-05-15 18:12:27.050494] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:34.793 [2024-05-15 18:12:27.050505] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:34.793 [2024-05-15 18:12:27.050516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:34.793 [2024-05-15 18:12:27.050528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:34.793 [2024-05-15 18:12:27.050539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:34.793 [2024-05-15 18:12:27.050550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:34.793 [2024-05-15 18:12:27.050563] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:34.793 [2024-05-15 18:12:27.050577] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:34.793 [2024-05-15 18:12:27.050590] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:34.793 [2024-05-15 18:12:27.050602] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:34.793 [2024-05-15 18:12:27.050614] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:34.793 [2024-05-15 18:12:27.050626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:34.794 [2024-05-15 18:12:27.050638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:34.794 [2024-05-15 18:12:27.050649] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:34.794 [2024-05-15 18:12:27.050661] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:34.794 [2024-05-15 18:12:27.050672] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:34.794 [2024-05-15 18:12:27.050684] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:34.794 [2024-05-15 
18:12:27.050695] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:34.794 [2024-05-15 18:12:27.050706] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:34.794 [2024-05-15 18:12:27.050718] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:34.794 [2024-05-15 18:12:27.050730] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:34.794 [2024-05-15 18:12:27.050742] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:34.794 [2024-05-15 18:12:27.050756] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:34.794 [2024-05-15 18:12:27.050775] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:34.794 [2024-05-15 18:12:27.050788] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:34.794 [2024-05-15 18:12:27.050800] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:34.794 [2024-05-15 18:12:27.050813] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:34.794 [2024-05-15 18:12:27.050826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.794 [2024-05-15 18:12:27.050839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:34.794 [2024-05-15 18:12:27.050851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:20:34.794 [2024-05-15 18:12:27.050862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.794 [2024-05-15 18:12:27.073636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.794 [2024-05-15 18:12:27.073949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:34.794 [2024-05-15 18:12:27.074075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.702 ms 00:20:34.794 [2024-05-15 18:12:27.074127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.794 [2024-05-15 18:12:27.074378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.794 [2024-05-15 18:12:27.074450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:34.794 [2024-05-15 18:12:27.074570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:34.794 [2024-05-15 18:12:27.074688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.794 [2024-05-15 18:12:27.134097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.794 [2024-05-15 18:12:27.134428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:34.794 [2024-05-15 18:12:27.134579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.321 ms 00:20:34.794 [2024-05-15 18:12:27.134635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.794 [2024-05-15 
18:12:27.134821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.794 [2024-05-15 18:12:27.134925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:34.794 [2024-05-15 18:12:27.135018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:34.794 [2024-05-15 18:12:27.135147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.794 [2024-05-15 18:12:27.135914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.794 [2024-05-15 18:12:27.135961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:34.794 [2024-05-15 18:12:27.135990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:20:34.794 [2024-05-15 18:12:27.136010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.794 [2024-05-15 18:12:27.136245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.794 [2024-05-15 18:12:27.136313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:34.794 [2024-05-15 18:12:27.136352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:20:34.794 [2024-05-15 18:12:27.136375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.794 [2024-05-15 18:12:27.157739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.794 [2024-05-15 18:12:27.157823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:34.794 [2024-05-15 18:12:27.157847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.303 ms 00:20:34.794 [2024-05-15 18:12:27.157860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.794 [2024-05-15 18:12:27.175667] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:34.794 [2024-05-15 18:12:27.175751] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:34.794 [2024-05-15 18:12:27.175774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.794 [2024-05-15 18:12:27.175788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:34.794 [2024-05-15 18:12:27.175805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.691 ms 00:20:34.794 [2024-05-15 18:12:27.175817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.794 [2024-05-15 18:12:27.206148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.794 [2024-05-15 18:12:27.206264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:34.794 [2024-05-15 18:12:27.206342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.144 ms 00:20:34.794 [2024-05-15 18:12:27.206358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.794 [2024-05-15 18:12:27.224795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.794 [2024-05-15 18:12:27.224948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:34.794 [2024-05-15 18:12:27.225003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.239 ms 00:20:34.794 [2024-05-15 18:12:27.225027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.794 [2024-05-15 18:12:27.244021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:34.794 [2024-05-15 18:12:27.244105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:34.794 [2024-05-15 18:12:27.244129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.743 ms 00:20:34.794 [2024-05-15 18:12:27.244141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.794 [2024-05-15 18:12:27.244879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.794 [2024-05-15 18:12:27.244921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:34.794 [2024-05-15 18:12:27.244939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:20:34.794 [2024-05-15 18:12:27.244951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.053 [2024-05-15 18:12:27.327072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.053 [2024-05-15 18:12:27.327146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:35.053 [2024-05-15 18:12:27.327185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.075 ms 00:20:35.053 [2024-05-15 18:12:27.327198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.053 [2024-05-15 18:12:27.343732] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:35.053 [2024-05-15 18:12:27.367536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.053 [2024-05-15 18:12:27.367603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:35.053 [2024-05-15 18:12:27.367624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.101 ms 00:20:35.053 [2024-05-15 18:12:27.367637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.053 [2024-05-15 18:12:27.367775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.053 [2024-05-15 18:12:27.367797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:35.053 [2024-05-15 18:12:27.367812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:35.053 [2024-05-15 18:12:27.367824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.053 [2024-05-15 18:12:27.367927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.053 [2024-05-15 18:12:27.367946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:35.053 [2024-05-15 18:12:27.367960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:35.053 [2024-05-15 18:12:27.367978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.053 [2024-05-15 18:12:27.370079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.053 [2024-05-15 18:12:27.370120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:35.053 [2024-05-15 18:12:27.370137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.068 ms 00:20:35.053 [2024-05-15 18:12:27.370149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.053 [2024-05-15 18:12:27.370191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.053 [2024-05-15 18:12:27.370208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:35.053 [2024-05-15 18:12:27.370227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.007 ms 00:20:35.053 [2024-05-15 18:12:27.370240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.053 [2024-05-15 18:12:27.370288] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:35.053 [2024-05-15 18:12:27.370328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.053 [2024-05-15 18:12:27.370340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:35.053 [2024-05-15 18:12:27.370353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:35.053 [2024-05-15 18:12:27.370365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.053 [2024-05-15 18:12:27.403811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.053 [2024-05-15 18:12:27.403944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:35.053 [2024-05-15 18:12:27.403967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.404 ms 00:20:35.053 [2024-05-15 18:12:27.403989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.053 [2024-05-15 18:12:27.404223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.054 [2024-05-15 18:12:27.404248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:35.054 [2024-05-15 18:12:27.404271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:35.054 [2024-05-15 18:12:27.404283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.054 [2024-05-15 18:12:27.405764] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:35.054 [2024-05-15 18:12:27.411783] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.588 ms, result 0 00:20:35.054 [2024-05-15 18:12:27.413063] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:35.054 [2024-05-15 18:12:27.431060] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:35.313  Copying: 4096/4096 [kB] (average 23 MBps)[2024-05-15 18:12:27.608980] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:35.313 [2024-05-15 18:12:27.621901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.313 [2024-05-15 18:12:27.621956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:35.313 [2024-05-15 18:12:27.621978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:35.313 [2024-05-15 18:12:27.621991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.313 [2024-05-15 18:12:27.622037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:35.313 [2024-05-15 18:12:27.625814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.313 [2024-05-15 18:12:27.625848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:35.313 [2024-05-15 18:12:27.625864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.754 ms 00:20:35.313 [2024-05-15 18:12:27.625876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.313 [2024-05-15 18:12:27.627770] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:35.313 [2024-05-15 18:12:27.627822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:35.313 [2024-05-15 18:12:27.627864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.863 ms 00:20:35.313 [2024-05-15 18:12:27.627878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.313 [2024-05-15 18:12:27.631867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.313 [2024-05-15 18:12:27.631908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:35.313 [2024-05-15 18:12:27.631933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.952 ms 00:20:35.313 [2024-05-15 18:12:27.631945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.313 [2024-05-15 18:12:27.639546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.313 [2024-05-15 18:12:27.639580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:35.313 [2024-05-15 18:12:27.639611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.524 ms 00:20:35.313 [2024-05-15 18:12:27.639623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.313 [2024-05-15 18:12:27.671033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.313 [2024-05-15 18:12:27.671084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:35.313 [2024-05-15 18:12:27.671103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.364 ms 00:20:35.313 [2024-05-15 18:12:27.671115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.313 [2024-05-15 18:12:27.689499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.313 [2024-05-15 18:12:27.689558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:35.313 [2024-05-15 18:12:27.689593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.303 ms 00:20:35.313 [2024-05-15 18:12:27.689615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.313 [2024-05-15 18:12:27.689800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.313 [2024-05-15 18:12:27.689839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:35.313 [2024-05-15 18:12:27.689853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:20:35.313 [2024-05-15 18:12:27.689865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.313 [2024-05-15 18:12:27.722475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.313 [2024-05-15 18:12:27.722560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:35.313 [2024-05-15 18:12:27.722598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.583 ms 00:20:35.313 [2024-05-15 18:12:27.722619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.313 [2024-05-15 18:12:27.754834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.313 [2024-05-15 18:12:27.754924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:35.313 [2024-05-15 18:12:27.754960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.120 ms 00:20:35.313 [2024-05-15 18:12:27.754972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.313 
[2024-05-15 18:12:27.785233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.313 [2024-05-15 18:12:27.785290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:35.313 [2024-05-15 18:12:27.785325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.183 ms 00:20:35.313 [2024-05-15 18:12:27.785338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.574 [2024-05-15 18:12:27.815305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.574 [2024-05-15 18:12:27.815367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:35.574 [2024-05-15 18:12:27.815402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.844 ms 00:20:35.574 [2024-05-15 18:12:27.815414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.574 [2024-05-15 18:12:27.815484] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:35.574 [2024-05-15 18:12:27.815539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 
wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.815994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.816007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.816020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.816032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:35.574 [2024-05-15 18:12:27.816045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816458] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816776] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:35.575 [2024-05-15 18:12:27.816908] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:35.575 [2024-05-15 18:12:27.816920] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 08a666aa-140a-4706-af52-1f2e14a3178c 00:20:35.575 [2024-05-15 18:12:27.816933] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:35.575 [2024-05-15 18:12:27.816944] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:35.575 [2024-05-15 18:12:27.816955] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:35.575 [2024-05-15 18:12:27.816967] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:35.575 [2024-05-15 18:12:27.816978] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:35.575 [2024-05-15 18:12:27.816990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:35.575 [2024-05-15 18:12:27.817001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:35.575 [2024-05-15 18:12:27.817012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:35.575 [2024-05-15 18:12:27.817022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:35.575 [2024-05-15 18:12:27.817034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.575 [2024-05-15 18:12:27.817045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:35.575 [2024-05-15 18:12:27.817064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.551 ms 00:20:35.575 [2024-05-15 18:12:27.817076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.575 [2024-05-15 18:12:27.834310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.575 [2024-05-15 18:12:27.834407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:35.575 [2024-05-15 18:12:27.834442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.204 ms 00:20:35.575 [2024-05-15 18:12:27.834454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.575 [2024-05-15 18:12:27.834747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.575 [2024-05-15 18:12:27.834780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:35.575 [2024-05-15 18:12:27.834794] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:20:35.575 [2024-05-15 18:12:27.834806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.575 [2024-05-15 18:12:27.884597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.576 [2024-05-15 18:12:27.884667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:35.576 [2024-05-15 18:12:27.884705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.576 [2024-05-15 18:12:27.884717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.576 [2024-05-15 18:12:27.884836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.576 [2024-05-15 18:12:27.884862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:35.576 [2024-05-15 18:12:27.884875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.576 [2024-05-15 18:12:27.884887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.576 [2024-05-15 18:12:27.884954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.576 [2024-05-15 18:12:27.884973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:35.576 [2024-05-15 18:12:27.884987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.576 [2024-05-15 18:12:27.884998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.576 [2024-05-15 18:12:27.885025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.576 [2024-05-15 18:12:27.885040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:35.576 [2024-05-15 18:12:27.885059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.576 [2024-05-15 18:12:27.885071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.576 [2024-05-15 18:12:27.990462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.576 [2024-05-15 18:12:27.990538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.576 [2024-05-15 18:12:27.990558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.576 [2024-05-15 18:12:27.990570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.576 [2024-05-15 18:12:28.033679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.576 [2024-05-15 18:12:28.033763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:35.576 [2024-05-15 18:12:28.033785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.576 [2024-05-15 18:12:28.033798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.576 [2024-05-15 18:12:28.033887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.576 [2024-05-15 18:12:28.033907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:35.576 [2024-05-15 18:12:28.033920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.576 [2024-05-15 18:12:28.033932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.576 [2024-05-15 18:12:28.033970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.576 [2024-05-15 18:12:28.033984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands 00:20:35.576 [2024-05-15 18:12:28.033996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.576 [2024-05-15 18:12:28.034015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.576 [2024-05-15 18:12:28.034149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.576 [2024-05-15 18:12:28.034170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:35.576 [2024-05-15 18:12:28.034183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.576 [2024-05-15 18:12:28.034195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.576 [2024-05-15 18:12:28.034259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.576 [2024-05-15 18:12:28.034278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:35.576 [2024-05-15 18:12:28.034290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.576 [2024-05-15 18:12:28.034335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.576 [2024-05-15 18:12:28.034394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.576 [2024-05-15 18:12:28.034411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:35.576 [2024-05-15 18:12:28.034423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.576 [2024-05-15 18:12:28.034434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.576 [2024-05-15 18:12:28.034489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.576 [2024-05-15 18:12:28.034505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:35.576 [2024-05-15 18:12:28.034518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.576 [2024-05-15 18:12:28.034535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.576 [2024-05-15 18:12:28.034711] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.814 ms, result 0 00:20:36.986 00:20:36.986 00:20:36.986 18:12:29 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78928 00:20:36.986 18:12:29 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78928 00:20:36.986 18:12:29 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:36.986 18:12:29 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 78928 ']' 00:20:36.986 18:12:29 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.986 18:12:29 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:36.986 18:12:29 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.986 18:12:29 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:36.986 18:12:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:36.986 [2024-05-15 18:12:29.325405] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
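The xtrace above is trim.sh starting a fresh spdk_tgt instance with FTL init tracing enabled (-L ftl_init), recording its pid in svcpid, and blocking in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that launch-and-wait pattern, assuming a stock SPDK checkout under $SPDK_DIR — the retry loop below merely stands in for the waitforlisten helper from autotest_common.sh, it is not the exact implementation:

    #!/usr/bin/env bash
    # Sketch: launch spdk_tgt with FTL init tracing and wait for its RPC socket.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    RPC_SOCK=/var/tmp/spdk.sock

    "$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!

    # Retry until the target answers an RPC on the UNIX socket, bailing out
    # early if the process dies during startup (waitforlisten does the same
    # with a bounded retry count).
    for _ in $(seq 1 100); do
      if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
      fi
      kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
    done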
00:20:36.986 [2024-05-15 18:12:29.325632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78928 ] 00:20:37.244 [2024-05-15 18:12:29.504728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.502 [2024-05-15 18:12:29.746140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.068 18:12:30 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:38.068 18:12:30 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:20:38.068 18:12:30 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:38.327 [2024-05-15 18:12:30.784639] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:38.327 [2024-05-15 18:12:30.784740] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:38.587 [2024-05-15 18:12:30.958933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.587 [2024-05-15 18:12:30.959005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:38.587 [2024-05-15 18:12:30.959031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:38.587 [2024-05-15 18:12:30.959044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.587 [2024-05-15 18:12:30.962499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.587 [2024-05-15 18:12:30.962544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.587 [2024-05-15 18:12:30.962567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.425 ms 00:20:38.587 [2024-05-15 18:12:30.962580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.587 [2024-05-15 18:12:30.962710] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:38.587 [2024-05-15 18:12:30.963672] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:38.587 [2024-05-15 18:12:30.963720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.587 [2024-05-15 18:12:30.963736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.587 [2024-05-15 18:12:30.963760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:20:38.587 [2024-05-15 18:12:30.963777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.587 [2024-05-15 18:12:30.965889] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:38.587 [2024-05-15 18:12:30.983004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.587 [2024-05-15 18:12:30.983080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:38.587 [2024-05-15 18:12:30.983105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.120 ms 00:20:38.587 [2024-05-15 18:12:30.983121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.587 [2024-05-15 18:12:30.983239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.587 [2024-05-15 18:12:30.983265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:38.587 [2024-05-15 18:12:30.983279] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:38.587 [2024-05-15 18:12:30.983326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.587 [2024-05-15 18:12:30.992176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.587 [2024-05-15 18:12:30.992237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.587 [2024-05-15 18:12:30.992256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.772 ms 00:20:38.587 [2024-05-15 18:12:30.992274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.587 [2024-05-15 18:12:30.992459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.587 [2024-05-15 18:12:30.992487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.587 [2024-05-15 18:12:30.992502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:38.587 [2024-05-15 18:12:30.992516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.587 [2024-05-15 18:12:30.992562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.587 [2024-05-15 18:12:30.992582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:38.587 [2024-05-15 18:12:30.992598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:38.587 [2024-05-15 18:12:30.992613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.587 [2024-05-15 18:12:30.992651] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:38.587 [2024-05-15 18:12:30.997637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.587 [2024-05-15 18:12:30.997677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.587 [2024-05-15 18:12:30.997698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.994 ms 00:20:38.587 [2024-05-15 18:12:30.997711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.587 [2024-05-15 18:12:30.997801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.587 [2024-05-15 18:12:30.997820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:38.587 [2024-05-15 18:12:30.997839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:38.587 [2024-05-15 18:12:30.997851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.587 [2024-05-15 18:12:30.997885] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:38.587 [2024-05-15 18:12:30.997914] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:38.587 [2024-05-15 18:12:30.997973] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:38.587 [2024-05-15 18:12:30.997997] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:38.587 [2024-05-15 18:12:30.998083] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:38.587 [2024-05-15 18:12:30.998100] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:38.588 [2024-05-15 18:12:30.998117] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:38.588 [2024-05-15 18:12:30.998132] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:38.588 [2024-05-15 18:12:30.998148] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:38.588 [2024-05-15 18:12:30.998164] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:38.588 [2024-05-15 18:12:30.998178] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:38.588 [2024-05-15 18:12:30.998189] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:38.588 [2024-05-15 18:12:30.998203] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:38.588 [2024-05-15 18:12:30.998215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.588 [2024-05-15 18:12:30.998232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:38.588 [2024-05-15 18:12:30.998244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:20:38.588 [2024-05-15 18:12:30.998258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.588 [2024-05-15 18:12:30.998363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.588 [2024-05-15 18:12:30.998385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:38.588 [2024-05-15 18:12:30.998401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:38.588 [2024-05-15 18:12:30.998415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.588 [2024-05-15 18:12:30.998517] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:38.588 [2024-05-15 18:12:30.998543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:38.588 [2024-05-15 18:12:30.998557] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:38.588 [2024-05-15 18:12:30.998572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.588 [2024-05-15 18:12:30.998588] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:38.588 [2024-05-15 18:12:30.998601] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:38.588 [2024-05-15 18:12:30.998613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:38.588 [2024-05-15 18:12:30.998627] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:38.588 [2024-05-15 18:12:30.998639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:38.588 [2024-05-15 18:12:30.998655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:38.588 [2024-05-15 18:12:30.998666] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:38.588 [2024-05-15 18:12:30.998679] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:38.588 [2024-05-15 18:12:30.998690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:38.588 [2024-05-15 18:12:30.998704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:38.588 [2024-05-15 18:12:30.998716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:38.588 [2024-05-15 18:12:30.998729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.588 
[2024-05-15 18:12:30.998740] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:38.588 [2024-05-15 18:12:30.998754] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:38.588 [2024-05-15 18:12:30.998765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.588 [2024-05-15 18:12:30.998779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:38.588 [2024-05-15 18:12:30.998791] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:38.588 [2024-05-15 18:12:30.998805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:38.588 [2024-05-15 18:12:30.998836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:38.588 [2024-05-15 18:12:30.998851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:38.588 [2024-05-15 18:12:30.998862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:38.588 [2024-05-15 18:12:30.998879] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:38.588 [2024-05-15 18:12:30.998891] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:38.588 [2024-05-15 18:12:30.998904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:38.588 [2024-05-15 18:12:30.998915] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:38.588 [2024-05-15 18:12:30.998928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:38.588 [2024-05-15 18:12:30.998939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:38.588 [2024-05-15 18:12:30.998953] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:38.588 [2024-05-15 18:12:30.998964] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:38.588 [2024-05-15 18:12:30.998977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:38.588 [2024-05-15 18:12:30.998988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:38.588 [2024-05-15 18:12:30.999002] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:38.588 [2024-05-15 18:12:30.999013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:38.588 [2024-05-15 18:12:30.999026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:38.588 [2024-05-15 18:12:30.999037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:38.588 [2024-05-15 18:12:30.999050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:38.588 [2024-05-15 18:12:30.999061] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:38.588 [2024-05-15 18:12:30.999078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:38.588 [2024-05-15 18:12:30.999090] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:38.588 [2024-05-15 18:12:30.999105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.588 [2024-05-15 18:12:30.999120] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:38.588 [2024-05-15 18:12:30.999134] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:38.588 [2024-05-15 18:12:30.999145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:38.588 [2024-05-15 18:12:30.999159] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region data_btm 00:20:38.588 [2024-05-15 18:12:30.999170] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:38.588 [2024-05-15 18:12:30.999186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:38.588 [2024-05-15 18:12:30.999199] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:38.588 [2024-05-15 18:12:30.999216] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:38.588 [2024-05-15 18:12:30.999230] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:38.588 [2024-05-15 18:12:30.999245] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:38.588 [2024-05-15 18:12:30.999257] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:38.588 [2024-05-15 18:12:30.999272] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:38.588 [2024-05-15 18:12:30.999284] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:38.588 [2024-05-15 18:12:30.999324] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:38.588 [2024-05-15 18:12:30.999339] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:38.588 [2024-05-15 18:12:30.999354] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:38.588 [2024-05-15 18:12:30.999366] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:38.588 [2024-05-15 18:12:30.999380] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:38.588 [2024-05-15 18:12:30.999392] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:38.588 [2024-05-15 18:12:30.999406] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:38.588 [2024-05-15 18:12:30.999419] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:38.588 [2024-05-15 18:12:30.999433] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:38.588 [2024-05-15 18:12:30.999446] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:38.588 [2024-05-15 18:12:30.999462] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:38.588 [2024-05-15 18:12:30.999474] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:38.588 [2024-05-15 
18:12:30.999489] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:38.588 [2024-05-15 18:12:30.999502] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:38.588 [2024-05-15 18:12:30.999752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.588 [2024-05-15 18:12:30.999765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:38.588 [2024-05-15 18:12:30.999782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:20:38.588 [2024-05-15 18:12:30.999793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.588 [2024-05-15 18:12:31.021971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.588 [2024-05-15 18:12:31.022034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:38.588 [2024-05-15 18:12:31.022059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.086 ms 00:20:38.588 [2024-05-15 18:12:31.022072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.588 [2024-05-15 18:12:31.022257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.588 [2024-05-15 18:12:31.022277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:38.588 [2024-05-15 18:12:31.022317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:38.588 [2024-05-15 18:12:31.022333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.588 [2024-05-15 18:12:31.067802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.588 [2024-05-15 18:12:31.067871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:38.588 [2024-05-15 18:12:31.067898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.432 ms 00:20:38.588 [2024-05-15 18:12:31.067912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.588 [2024-05-15 18:12:31.068042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.588 [2024-05-15 18:12:31.068062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:38.589 [2024-05-15 18:12:31.068081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:38.589 [2024-05-15 18:12:31.068093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.589 [2024-05-15 18:12:31.068737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.589 [2024-05-15 18:12:31.068776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:38.589 [2024-05-15 18:12:31.068797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:20:38.589 [2024-05-15 18:12:31.068809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.589 [2024-05-15 18:12:31.068972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.589 [2024-05-15 18:12:31.068991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:38.589 [2024-05-15 18:12:31.069017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:38.589 [2024-05-15 18:12:31.069029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.848 [2024-05-15 18:12:31.090445] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.848 [2024-05-15 18:12:31.090503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:38.848 [2024-05-15 18:12:31.090528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.380 ms 00:20:38.848 [2024-05-15 18:12:31.090541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.107289] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:38.849 [2024-05-15 18:12:31.107353] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:38.849 [2024-05-15 18:12:31.107380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.107394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:38.849 [2024-05-15 18:12:31.107410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.655 ms 00:20:38.849 [2024-05-15 18:12:31.107423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.136544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.136593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:38.849 [2024-05-15 18:12:31.136620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.018 ms 00:20:38.849 [2024-05-15 18:12:31.136633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.152210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.152255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:38.849 [2024-05-15 18:12:31.152278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.454 ms 00:20:38.849 [2024-05-15 18:12:31.152305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.167649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.167708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:38.849 [2024-05-15 18:12:31.167730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.196 ms 00:20:38.849 [2024-05-15 18:12:31.167743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.168319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.168353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:38.849 [2024-05-15 18:12:31.168373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:20:38.849 [2024-05-15 18:12:31.168398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.246943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.247011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:38.849 [2024-05-15 18:12:31.247037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.505 ms 00:20:38.849 [2024-05-15 18:12:31.247051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.259682] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p 
maximum resident size is: 59 (of 60) MiB 00:20:38.849 [2024-05-15 18:12:31.281224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.281317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:38.849 [2024-05-15 18:12:31.281341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.018 ms 00:20:38.849 [2024-05-15 18:12:31.281357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.281505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.281528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:38.849 [2024-05-15 18:12:31.281543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:38.849 [2024-05-15 18:12:31.281560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.281639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.281659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:38.849 [2024-05-15 18:12:31.281673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:38.849 [2024-05-15 18:12:31.281687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.283807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.283859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:38.849 [2024-05-15 18:12:31.283876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:20:38.849 [2024-05-15 18:12:31.283891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.283931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.283950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:38.849 [2024-05-15 18:12:31.283967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:38.849 [2024-05-15 18:12:31.283981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.284029] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:38.849 [2024-05-15 18:12:31.284049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.284062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:38.849 [2024-05-15 18:12:31.284079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:38.849 [2024-05-15 18:12:31.284091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.315610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.315852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:38.849 [2024-05-15 18:12:31.316008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.481 ms 00:20:38.849 [2024-05-15 18:12:31.316073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.316352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.849 [2024-05-15 18:12:31.316496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
00:20:38.849 [2024-05-15 18:12:31.316529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:38.849 [2024-05-15 18:12:31.316544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.849 [2024-05-15 18:12:31.317701] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:38.849 [2024-05-15 18:12:31.321921] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 358.397 ms, result 0 00:20:38.849 [2024-05-15 18:12:31.322966] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:39.108 Some configs were skipped because the RPC state that can call them passed over. 00:20:39.108 18:12:31 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:39.367 [2024-05-15 18:12:31.659642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.367 [2024-05-15 18:12:31.659910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:20:39.367 [2024-05-15 18:12:31.660072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.589 ms 00:20:39.367 [2024-05-15 18:12:31.660135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.367 [2024-05-15 18:12:31.660338] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 33.249 ms, result 0 00:20:39.367 true 00:20:39.367 18:12:31 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:39.626 [2024-05-15 18:12:31.975732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.626 [2024-05-15 18:12:31.975996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:20:39.626 [2024-05-15 18:12:31.976141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.538 ms 00:20:39.626 [2024-05-15 18:12:31.976195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.626 [2024-05-15 18:12:31.976289] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 32.092 ms, result 0 00:20:39.626 true 00:20:39.626 18:12:31 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78928 00:20:39.626 18:12:31 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 78928 ']' 00:20:39.626 18:12:31 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 78928 00:20:39.626 18:12:31 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:20:39.626 18:12:32 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:39.626 18:12:32 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78928 00:20:39.626 killing process with pid 78928 00:20:39.626 18:12:32 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:39.626 18:12:32 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:39.626 18:12:32 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78928' 00:20:39.626 18:12:32 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 78928 00:20:39.626 18:12:32 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 78928 00:20:40.571 [2024-05-15 18:12:33.049804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.571 
[2024-05-15 18:12:33.049923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:40.572 [2024-05-15 18:12:33.049947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:40.572 [2024-05-15 18:12:33.049961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.572 [2024-05-15 18:12:33.049994] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:40.572 [2024-05-15 18:12:33.053758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.572 [2024-05-15 18:12:33.053792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:40.572 [2024-05-15 18:12:33.053842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.738 ms 00:20:40.572 [2024-05-15 18:12:33.053855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.572 [2024-05-15 18:12:33.054183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.572 [2024-05-15 18:12:33.054217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:40.572 [2024-05-15 18:12:33.054232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:20:40.572 [2024-05-15 18:12:33.054244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.572 [2024-05-15 18:12:33.058453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.572 [2024-05-15 18:12:33.058495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:40.572 [2024-05-15 18:12:33.058515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.180 ms 00:20:40.572 [2024-05-15 18:12:33.058543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.572 [2024-05-15 18:12:33.065977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.572 [2024-05-15 18:12:33.066026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:40.572 [2024-05-15 18:12:33.066047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.383 ms 00:20:40.572 [2024-05-15 18:12:33.066060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.831 [2024-05-15 18:12:33.078640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.831 [2024-05-15 18:12:33.078681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:40.831 [2024-05-15 18:12:33.078711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.490 ms 00:20:40.831 [2024-05-15 18:12:33.078723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.831 [2024-05-15 18:12:33.087592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.831 [2024-05-15 18:12:33.087635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:40.831 [2024-05-15 18:12:33.087658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.800 ms 00:20:40.831 [2024-05-15 18:12:33.087670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.831 [2024-05-15 18:12:33.087831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.831 [2024-05-15 18:12:33.087864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:40.831 [2024-05-15 18:12:33.087892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:40.831 
[2024-05-15 18:12:33.087904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.831 [2024-05-15 18:12:33.101561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.831 [2024-05-15 18:12:33.101599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:40.831 [2024-05-15 18:12:33.101618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.624 ms 00:20:40.831 [2024-05-15 18:12:33.101630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.831 [2024-05-15 18:12:33.114557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.831 [2024-05-15 18:12:33.114597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:40.831 [2024-05-15 18:12:33.114617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.860 ms 00:20:40.831 [2024-05-15 18:12:33.114629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.831 [2024-05-15 18:12:33.126936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.831 [2024-05-15 18:12:33.126975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:40.831 [2024-05-15 18:12:33.126995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.220 ms 00:20:40.831 [2024-05-15 18:12:33.127007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.831 [2024-05-15 18:12:33.139946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.831 [2024-05-15 18:12:33.139986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:40.831 [2024-05-15 18:12:33.140005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.840 ms 00:20:40.831 [2024-05-15 18:12:33.140016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.831 [2024-05-15 18:12:33.140080] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:40.831 [2024-05-15 18:12:33.140105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:40.831 [2024-05-15 18:12:33.140122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:40.831 [2024-05-15 18:12:33.140135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:40.831 [2024-05-15 18:12:33.140149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 
0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140949] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.140989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:40.832 [2024-05-15 18:12:33.141184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 
18:12:33.141302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:40.833 [2024-05-15 18:12:33.141532] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:40.833 [2024-05-15 18:12:33.141546] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 08a666aa-140a-4706-af52-1f2e14a3178c 00:20:40.833 [2024-05-15 18:12:33.141562] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:40.833 [2024-05-15 18:12:33.141578] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:40.833 [2024-05-15 18:12:33.141591] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:40.833 [2024-05-15 18:12:33.141604] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:40.833 [2024-05-15 18:12:33.141616] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:40.833 [2024-05-15 18:12:33.141630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:40.833 [2024-05-15 18:12:33.141642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:40.833 [2024-05-15 18:12:33.141655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:40.833 [2024-05-15 18:12:33.141666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:40.833 [2024-05-15 18:12:33.141680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.833 [2024-05-15 18:12:33.141692] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:40.833 [2024-05-15 18:12:33.141708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:20:40.833 [2024-05-15 18:12:33.141719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.833 [2024-05-15 18:12:33.159486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.833 [2024-05-15 18:12:33.159524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:40.833 [2024-05-15 18:12:33.159561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.704 ms 00:20:40.833 [2024-05-15 18:12:33.159574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.833 [2024-05-15 18:12:33.159883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.833 [2024-05-15 18:12:33.159908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:40.833 [2024-05-15 18:12:33.159931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:20:40.833 [2024-05-15 18:12:33.159946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.833 [2024-05-15 18:12:33.223948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.833 [2024-05-15 18:12:33.224017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.833 [2024-05-15 18:12:33.224041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.833 [2024-05-15 18:12:33.224055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.833 [2024-05-15 18:12:33.224192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.833 [2024-05-15 18:12:33.224216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.833 [2024-05-15 18:12:33.224232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.833 [2024-05-15 18:12:33.224248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.833 [2024-05-15 18:12:33.224340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.833 [2024-05-15 18:12:33.224362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.833 [2024-05-15 18:12:33.224378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.833 [2024-05-15 18:12:33.224390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.833 [2024-05-15 18:12:33.224425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.833 [2024-05-15 18:12:33.224439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.833 [2024-05-15 18:12:33.224454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.833 [2024-05-15 18:12:33.224466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.091 [2024-05-15 18:12:33.338932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.091 [2024-05-15 18:12:33.339012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:41.091 [2024-05-15 18:12:33.339048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.091 [2024-05-15 18:12:33.339062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.091 [2024-05-15 18:12:33.380848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:20:41.091 [2024-05-15 18:12:33.380921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:41.091 [2024-05-15 18:12:33.380959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.091 [2024-05-15 18:12:33.380972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.091 [2024-05-15 18:12:33.381064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.091 [2024-05-15 18:12:33.381097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:41.091 [2024-05-15 18:12:33.381112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.091 [2024-05-15 18:12:33.381138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.091 [2024-05-15 18:12:33.381182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.091 [2024-05-15 18:12:33.381197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:41.091 [2024-05-15 18:12:33.381211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.091 [2024-05-15 18:12:33.381222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.091 [2024-05-15 18:12:33.381404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.091 [2024-05-15 18:12:33.381428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:41.091 [2024-05-15 18:12:33.381444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.091 [2024-05-15 18:12:33.381457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.091 [2024-05-15 18:12:33.381516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.091 [2024-05-15 18:12:33.381535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:41.091 [2024-05-15 18:12:33.381550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.091 [2024-05-15 18:12:33.381562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.091 [2024-05-15 18:12:33.381616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.091 [2024-05-15 18:12:33.381634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:41.091 [2024-05-15 18:12:33.381649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.091 [2024-05-15 18:12:33.381660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.091 [2024-05-15 18:12:33.381723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.091 [2024-05-15 18:12:33.381740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:41.091 [2024-05-15 18:12:33.381756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.091 [2024-05-15 18:12:33.381768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.091 [2024-05-15 18:12:33.381941] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.105 ms, result 0 00:20:42.464 18:12:34 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:42.464 [2024-05-15 18:12:34.657354] Starting SPDK v24.05-pre 
git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:20:42.464 [2024-05-15 18:12:34.657546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78998 ] 00:20:42.464 [2024-05-15 18:12:34.824873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.723 [2024-05-15 18:12:35.066783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.982 [2024-05-15 18:12:35.417902] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:42.982 [2024-05-15 18:12:35.417997] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:43.241 [2024-05-15 18:12:35.576790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.242 [2024-05-15 18:12:35.576856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:43.242 [2024-05-15 18:12:35.576878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:43.242 [2024-05-15 18:12:35.576891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.242 [2024-05-15 18:12:35.580430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.242 [2024-05-15 18:12:35.580474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:43.242 [2024-05-15 18:12:35.580509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.508 ms 00:20:43.242 [2024-05-15 18:12:35.580526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.242 [2024-05-15 18:12:35.580683] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:43.242 [2024-05-15 18:12:35.581669] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:43.242 [2024-05-15 18:12:35.581716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.242 [2024-05-15 18:12:35.581731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:43.242 [2024-05-15 18:12:35.581745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:20:43.242 [2024-05-15 18:12:35.581757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.242 [2024-05-15 18:12:35.583764] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:43.242 [2024-05-15 18:12:35.600607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.242 [2024-05-15 18:12:35.600654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:43.242 [2024-05-15 18:12:35.600673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.844 ms 00:20:43.242 [2024-05-15 18:12:35.600685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.242 [2024-05-15 18:12:35.600804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.242 [2024-05-15 18:12:35.600830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:43.242 [2024-05-15 18:12:35.600844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:43.242 [2024-05-15 18:12:35.600856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.242 [2024-05-15 18:12:35.609526] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:43.242 [2024-05-15 18:12:35.609576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:43.242 [2024-05-15 18:12:35.609608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.608 ms 00:20:43.242 [2024-05-15 18:12:35.609620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.242 [2024-05-15 18:12:35.609765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.242 [2024-05-15 18:12:35.609788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:43.242 [2024-05-15 18:12:35.609801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:43.242 [2024-05-15 18:12:35.609813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.242 [2024-05-15 18:12:35.609855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.242 [2024-05-15 18:12:35.609872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:43.242 [2024-05-15 18:12:35.609885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:43.242 [2024-05-15 18:12:35.609896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.242 [2024-05-15 18:12:35.609929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:43.242 [2024-05-15 18:12:35.615048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.242 [2024-05-15 18:12:35.615085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:43.242 [2024-05-15 18:12:35.615118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.128 ms 00:20:43.242 [2024-05-15 18:12:35.615144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.242 [2024-05-15 18:12:35.615254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.242 [2024-05-15 18:12:35.615272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:43.242 [2024-05-15 18:12:35.615286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:43.242 [2024-05-15 18:12:35.615297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.242 [2024-05-15 18:12:35.615329] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:43.242 [2024-05-15 18:12:35.615385] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:43.242 [2024-05-15 18:12:35.615429] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:43.242 [2024-05-15 18:12:35.615453] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:43.242 [2024-05-15 18:12:35.615534] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:43.242 [2024-05-15 18:12:35.615550] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:43.242 [2024-05-15 18:12:35.615566] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:43.242 [2024-05-15 18:12:35.615581] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:43.242 [2024-05-15 
18:12:35.615595] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:43.242 [2024-05-15 18:12:35.615607] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:43.242 [2024-05-15 18:12:35.615619] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:43.242 [2024-05-15 18:12:35.615630] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:43.242 [2024-05-15 18:12:35.615646] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:43.242 [2024-05-15 18:12:35.615658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.242 [2024-05-15 18:12:35.615673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:43.242 [2024-05-15 18:12:35.615686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:20:43.242 [2024-05-15 18:12:35.615697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.242 [2024-05-15 18:12:35.615776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.242 [2024-05-15 18:12:35.615792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:43.242 [2024-05-15 18:12:35.615804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:43.242 [2024-05-15 18:12:35.615815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.242 [2024-05-15 18:12:35.615921] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:43.242 [2024-05-15 18:12:35.615946] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:43.242 [2024-05-15 18:12:35.615965] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:43.242 [2024-05-15 18:12:35.615977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.242 [2024-05-15 18:12:35.615988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:43.242 [2024-05-15 18:12:35.615999] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:43.242 [2024-05-15 18:12:35.616010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:43.242 [2024-05-15 18:12:35.616021] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:43.242 [2024-05-15 18:12:35.616032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:43.242 [2024-05-15 18:12:35.616043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:43.242 [2024-05-15 18:12:35.616053] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:43.242 [2024-05-15 18:12:35.616064] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:43.242 [2024-05-15 18:12:35.616075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:43.242 [2024-05-15 18:12:35.616098] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:43.242 [2024-05-15 18:12:35.616110] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:43.242 [2024-05-15 18:12:35.616121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.242 [2024-05-15 18:12:35.616132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:43.242 [2024-05-15 18:12:35.616143] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:43.242 [2024-05-15 
18:12:35.616154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.242 [2024-05-15 18:12:35.616167] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:43.242 [2024-05-15 18:12:35.616178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:43.242 [2024-05-15 18:12:35.616193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:43.242 [2024-05-15 18:12:35.616204] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:43.242 [2024-05-15 18:12:35.616214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:43.242 [2024-05-15 18:12:35.616225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:43.242 [2024-05-15 18:12:35.616235] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:43.242 [2024-05-15 18:12:35.616246] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:43.242 [2024-05-15 18:12:35.616257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:43.242 [2024-05-15 18:12:35.616267] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:43.242 [2024-05-15 18:12:35.616278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:43.242 [2024-05-15 18:12:35.616288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:43.242 [2024-05-15 18:12:35.616316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:43.242 [2024-05-15 18:12:35.616328] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:43.242 [2024-05-15 18:12:35.616339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:43.242 [2024-05-15 18:12:35.616349] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:43.242 [2024-05-15 18:12:35.616360] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:43.242 [2024-05-15 18:12:35.616371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:43.242 [2024-05-15 18:12:35.616383] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:43.242 [2024-05-15 18:12:35.616394] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:43.242 [2024-05-15 18:12:35.616404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:43.242 [2024-05-15 18:12:35.616414] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:43.242 [2024-05-15 18:12:35.616426] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:43.242 [2024-05-15 18:12:35.616437] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:43.242 [2024-05-15 18:12:35.616449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.242 [2024-05-15 18:12:35.616461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:43.243 [2024-05-15 18:12:35.616472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:43.243 [2024-05-15 18:12:35.616482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:43.243 [2024-05-15 18:12:35.616494] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:43.243 [2024-05-15 18:12:35.616505] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:43.243 [2024-05-15 18:12:35.616516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
102400.00 MiB 00:20:43.243 [2024-05-15 18:12:35.616529] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:43.243 [2024-05-15 18:12:35.616543] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:43.243 [2024-05-15 18:12:35.616557] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:43.243 [2024-05-15 18:12:35.616569] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:43.243 [2024-05-15 18:12:35.616580] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:43.243 [2024-05-15 18:12:35.616593] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:43.243 [2024-05-15 18:12:35.616604] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:43.243 [2024-05-15 18:12:35.616615] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:43.243 [2024-05-15 18:12:35.616627] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:43.243 [2024-05-15 18:12:35.616638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:43.243 [2024-05-15 18:12:35.616650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:43.243 [2024-05-15 18:12:35.616662] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:43.243 [2024-05-15 18:12:35.616673] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:43.243 [2024-05-15 18:12:35.616685] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:43.243 [2024-05-15 18:12:35.616697] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:43.243 [2024-05-15 18:12:35.616708] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:43.243 [2024-05-15 18:12:35.616721] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:43.243 [2024-05-15 18:12:35.616740] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:43.243 [2024-05-15 18:12:35.616752] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:43.243 [2024-05-15 18:12:35.616764] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:43.243 [2024-05-15 18:12:35.616776] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:43.243 [2024-05-15 18:12:35.616793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.243 [2024-05-15 18:12:35.616805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:43.243 [2024-05-15 18:12:35.616818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:20:43.243 [2024-05-15 18:12:35.616829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.243 [2024-05-15 18:12:35.639284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.243 [2024-05-15 18:12:35.639354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:43.243 [2024-05-15 18:12:35.639392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.385 ms 00:20:43.243 [2024-05-15 18:12:35.639405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.243 [2024-05-15 18:12:35.639592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.243 [2024-05-15 18:12:35.639612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:43.243 [2024-05-15 18:12:35.639626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:43.243 [2024-05-15 18:12:35.639638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.243 [2024-05-15 18:12:35.692084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.243 [2024-05-15 18:12:35.692147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:43.243 [2024-05-15 18:12:35.692167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.411 ms 00:20:43.243 [2024-05-15 18:12:35.692179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.243 [2024-05-15 18:12:35.692350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.243 [2024-05-15 18:12:35.692372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:43.243 [2024-05-15 18:12:35.692387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:43.243 [2024-05-15 18:12:35.692398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.243 [2024-05-15 18:12:35.693006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.243 [2024-05-15 18:12:35.693031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:43.243 [2024-05-15 18:12:35.693053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:20:43.243 [2024-05-15 18:12:35.693064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.243 [2024-05-15 18:12:35.693251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.243 [2024-05-15 18:12:35.693280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:43.243 [2024-05-15 18:12:35.693306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:20:43.243 [2024-05-15 18:12:35.693322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.243 [2024-05-15 18:12:35.715095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.243 [2024-05-15 18:12:35.715144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:43.243 [2024-05-15 18:12:35.715162] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.740 ms 00:20:43.243 [2024-05-15 18:12:35.715174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.243 [2024-05-15 18:12:35.732237] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:43.243 [2024-05-15 18:12:35.732309] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:43.243 [2024-05-15 18:12:35.732330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.243 [2024-05-15 18:12:35.732344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:43.243 [2024-05-15 18:12:35.732365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.960 ms 00:20:43.243 [2024-05-15 18:12:35.732377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.503 [2024-05-15 18:12:35.761868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.503 [2024-05-15 18:12:35.761960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:43.503 [2024-05-15 18:12:35.761998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.380 ms 00:20:43.503 [2024-05-15 18:12:35.762012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.503 [2024-05-15 18:12:35.779169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.503 [2024-05-15 18:12:35.779254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:43.503 [2024-05-15 18:12:35.779274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.984 ms 00:20:43.503 [2024-05-15 18:12:35.779286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.503 [2024-05-15 18:12:35.794444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.503 [2024-05-15 18:12:35.794490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:43.503 [2024-05-15 18:12:35.794507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.022 ms 00:20:43.503 [2024-05-15 18:12:35.794527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.503 [2024-05-15 18:12:35.795068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.503 [2024-05-15 18:12:35.795097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:43.503 [2024-05-15 18:12:35.795113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:20:43.503 [2024-05-15 18:12:35.795124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.503 [2024-05-15 18:12:35.873727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.503 [2024-05-15 18:12:35.873819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:43.503 [2024-05-15 18:12:35.873857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.561 ms 00:20:43.503 [2024-05-15 18:12:35.873869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.503 [2024-05-15 18:12:35.886203] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:43.503 [2024-05-15 18:12:35.907751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.503 [2024-05-15 18:12:35.907813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize L2P 00:20:43.503 [2024-05-15 18:12:35.907859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.685 ms 00:20:43.503 [2024-05-15 18:12:35.907872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.503 [2024-05-15 18:12:35.908013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.503 [2024-05-15 18:12:35.908034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:43.503 [2024-05-15 18:12:35.908049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:43.503 [2024-05-15 18:12:35.908061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.503 [2024-05-15 18:12:35.908140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.503 [2024-05-15 18:12:35.908158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:43.503 [2024-05-15 18:12:35.908170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:43.503 [2024-05-15 18:12:35.908182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.503 [2024-05-15 18:12:35.910320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.503 [2024-05-15 18:12:35.910389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:43.503 [2024-05-15 18:12:35.910405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:20:43.503 [2024-05-15 18:12:35.910417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.503 [2024-05-15 18:12:35.910456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.503 [2024-05-15 18:12:35.910472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:43.503 [2024-05-15 18:12:35.910491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:43.503 [2024-05-15 18:12:35.910503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.503 [2024-05-15 18:12:35.910554] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:43.503 [2024-05-15 18:12:35.910572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.503 [2024-05-15 18:12:35.910583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:43.503 [2024-05-15 18:12:35.910595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:43.503 [2024-05-15 18:12:35.910606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.503 [2024-05-15 18:12:35.943292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.503 [2024-05-15 18:12:35.943368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:43.503 [2024-05-15 18:12:35.943387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.657 ms 00:20:43.503 [2024-05-15 18:12:35.943400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.503 [2024-05-15 18:12:35.943532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.503 [2024-05-15 18:12:35.943554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:43.503 [2024-05-15 18:12:35.943568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:43.503 [2024-05-15 18:12:35.943579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
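The startup trace above emits one Action/name/duration/status quadruple per management step (restore NV cache metadata, valid map, band info, trim metadata, P2L checkpoints, L2P). A minimal sketch, not an SPDK tool, for tallying those per-step durations from a saved copy of this console output; it assumes one trace_step record per line, as the console originally printed them, and 'ftl.log' is a placeholder file name, not something the test produces:

    awk '
        /trace_step/ && /name:/     { sub(/.*name: /, ""); name = $0 }  # remember the step name
        /trace_step/ && /duration:/ {                                   # pair it with the next duration
            for (i = 1; i <= NF; i++) if ($i == "duration:") d = $(i + 1)
            printf "%-40s %10.3f ms\n", name, d
            total += d
        }
        END { printf "%-40s %10.3f ms\n", "total", total }
    ' ftl.log

The summed total should land close to the duration reported by the 'Management process finished' line that follows.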
00:20:43.503 [2024-05-15 18:12:35.944751] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:43.503 [2024-05-15 18:12:35.948749] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 367.626 ms, result 0 00:20:43.503 [2024-05-15 18:12:35.949568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:43.503 [2024-05-15 18:12:35.965476] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:54.463  Copying: 256/256 [MB] (average 24 MBps)[2024-05-15 18:12:46.883193] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:54.463 [2024-05-15 18:12:46.901084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.463 [2024-05-15 18:12:46.901151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:54.463 [2024-05-15 18:12:46.901189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:54.463 [2024-05-15 18:12:46.901202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.463 [2024-05-15 18:12:46.901247] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:54.463 [2024-05-15 18:12:46.904993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.463 [2024-05-15 18:12:46.905039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:54.463 [2024-05-15 18:12:46.905070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:20:54.463 [2024-05-15 18:12:46.905081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.463 [2024-05-15 18:12:46.905408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.463 [2024-05-15 18:12:46.905433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:54.463 [2024-05-15 18:12:46.905446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:20:54.463 [2024-05-15 18:12:46.905458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.463 [2024-05-15 18:12:46.909134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.463 [2024-05-15 18:12:46.909163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:54.463 [2024-05-15 18:12:46.909199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.653 ms 00:20:54.463 [2024-05-15 18:12:46.909210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.463 [2024-05-15 18:12:46.916283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.463 [2024-05-15 18:12:46.916344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:54.463 [2024-05-15 18:12:46.916360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.049 ms 00:20:54.463 [2024-05-15 18:12:46.916371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.463 [2024-05-15
18:12:46.947452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.463 [2024-05-15 18:12:46.947577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:54.463 [2024-05-15 18:12:46.947614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.011 ms 00:20:54.463 [2024-05-15 18:12:46.947626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.723 [2024-05-15 18:12:46.966632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.723 [2024-05-15 18:12:46.966703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:54.723 [2024-05-15 18:12:46.966724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.890 ms 00:20:54.723 [2024-05-15 18:12:46.966751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.723 [2024-05-15 18:12:46.966957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.723 [2024-05-15 18:12:46.966978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:54.723 [2024-05-15 18:12:46.966996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:20:54.723 [2024-05-15 18:12:46.967008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.723 [2024-05-15 18:12:46.998200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.723 [2024-05-15 18:12:46.998289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:54.723 [2024-05-15 18:12:46.998353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.167 ms 00:20:54.723 [2024-05-15 18:12:46.998366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.723 [2024-05-15 18:12:47.031522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.723 [2024-05-15 18:12:47.031617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:54.723 [2024-05-15 18:12:47.031649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.079 ms 00:20:54.723 [2024-05-15 18:12:47.031669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.723 [2024-05-15 18:12:47.064166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.723 [2024-05-15 18:12:47.064224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:54.723 [2024-05-15 18:12:47.064246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.349 ms 00:20:54.723 [2024-05-15 18:12:47.064258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.723 [2024-05-15 18:12:47.095031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.723 [2024-05-15 18:12:47.095118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:54.723 [2024-05-15 18:12:47.095155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.645 ms 00:20:54.723 [2024-05-15 18:12:47.095167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.723 [2024-05-15 18:12:47.095282] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:54.723 [2024-05-15 18:12:47.095346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:20:54.723 [2024-05-15 18:12:47.095375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:54.723 [2024-05-15 18:12:47.095628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:20:54.724 [2024-05-15 18:12:47.095676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.095992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096330] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:54.724 [2024-05-15 18:12:47.096651] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:54.724 [2024-05-15 18:12:47.096663] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: 08a666aa-140a-4706-af52-1f2e14a3178c 00:20:54.724 [2024-05-15 18:12:47.096675] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:54.724 [2024-05-15 18:12:47.096686] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:54.724 [2024-05-15 18:12:47.096698] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:54.724 [2024-05-15 18:12:47.096709] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:54.724 [2024-05-15 18:12:47.096720] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:54.724 [2024-05-15 18:12:47.096732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:54.724 [2024-05-15 18:12:47.096743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:54.724 [2024-05-15 18:12:47.096753] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:54.724 [2024-05-15 18:12:47.096763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:54.724 [2024-05-15 18:12:47.096775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.724 [2024-05-15 18:12:47.096787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:54.724 [2024-05-15 18:12:47.096804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:20:54.724 [2024-05-15 18:12:47.096816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.724 [2024-05-15 18:12:47.114225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.724 [2024-05-15 18:12:47.114284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:54.724 [2024-05-15 18:12:47.114324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.378 ms 00:20:54.724 [2024-05-15 18:12:47.114338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.724 [2024-05-15 18:12:47.114637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.724 [2024-05-15 18:12:47.114663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:54.725 [2024-05-15 18:12:47.114676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:20:54.725 [2024-05-15 18:12:47.114688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.725 [2024-05-15 18:12:47.165547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.725 [2024-05-15 18:12:47.165613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.725 [2024-05-15 18:12:47.165633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.725 [2024-05-15 18:12:47.165646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.725 [2024-05-15 18:12:47.165775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.725 [2024-05-15 18:12:47.165799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.725 [2024-05-15 18:12:47.165812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.725 [2024-05-15 18:12:47.165823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.725 [2024-05-15 18:12:47.165889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.725 [2024-05-15 18:12:47.165907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:20:54.725 [2024-05-15 18:12:47.165920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.725 [2024-05-15 18:12:47.165932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.725 [2024-05-15 18:12:47.165957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.725 [2024-05-15 18:12:47.165970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.725 [2024-05-15 18:12:47.165988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.725 [2024-05-15 18:12:47.165999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.011 [2024-05-15 18:12:47.273405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.011 [2024-05-15 18:12:47.273468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.011 [2024-05-15 18:12:47.273488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.011 [2024-05-15 18:12:47.273500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.011 [2024-05-15 18:12:47.313632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.011 [2024-05-15 18:12:47.313718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.011 [2024-05-15 18:12:47.313754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.011 [2024-05-15 18:12:47.313766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.011 [2024-05-15 18:12:47.313863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.011 [2024-05-15 18:12:47.313880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.011 [2024-05-15 18:12:47.313893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.011 [2024-05-15 18:12:47.313906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.011 [2024-05-15 18:12:47.313943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.011 [2024-05-15 18:12:47.313958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.011 [2024-05-15 18:12:47.313970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.011 [2024-05-15 18:12:47.313987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.011 [2024-05-15 18:12:47.314110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.011 [2024-05-15 18:12:47.314130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.011 [2024-05-15 18:12:47.314143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.011 [2024-05-15 18:12:47.314154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.011 [2024-05-15 18:12:47.314210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.011 [2024-05-15 18:12:47.314227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:55.011 [2024-05-15 18:12:47.314240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.011 [2024-05-15 18:12:47.314252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.011 [2024-05-15 18:12:47.314306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.011 [2024-05-15 18:12:47.314349] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.011 [2024-05-15 18:12:47.314365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.011 [2024-05-15 18:12:47.314377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.011 [2024-05-15 18:12:47.314435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.011 [2024-05-15 18:12:47.314452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.011 [2024-05-15 18:12:47.314464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.011 [2024-05-15 18:12:47.314487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.011 [2024-05-15 18:12:47.314666] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 413.578 ms, result 0 00:20:56.388 00:20:56.388 00:20:56.388 18:12:48 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:56.647 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:56.647 18:12:49 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:56.647 18:12:49 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:56.647 18:12:49 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:56.647 18:12:49 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:56.647 18:12:49 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:56.906 18:12:49 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:56.906 18:12:49 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78928 00:20:56.906 18:12:49 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 78928 ']' 00:20:56.906 18:12:49 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 78928 00:20:56.906 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (78928) - No such process 00:20:56.906 Process with pid 78928 is not found 00:20:56.906 18:12:49 ftl.ftl_trim -- common/autotest_common.sh@973 -- # echo 'Process with pid 78928 is not found' 00:20:56.906 ************************************ 00:20:56.906 END TEST ftl_trim 00:20:56.906 ************************************ 00:20:56.906 00:20:56.906 real 1m12.271s 00:20:56.906 user 1m35.529s 00:20:56.906 sys 0m7.541s 00:20:56.906 18:12:49 ftl.ftl_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:56.906 18:12:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:56.906 18:12:49 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:56.906 18:12:49 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:56.906 18:12:49 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:56.906 18:12:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:56.906 ************************************ 00:20:56.906 START TEST ftl_restore 00:20:56.906 ************************************ 00:20:56.906 18:12:49 ftl.ftl_restore -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:56.906 * Looking for test storage... 
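The ftl_trim teardown just above hinges on a checksum round-trip: a checksum of the test pattern is recorded up front, and after the device has been shut down and brought back, the re-read data must still verify (the 'data: OK' line from md5sum -c). A minimal sketch of that pattern, with placeholder file names rather than the test's real test/ftl paths, and sized to match the 256 MB copy traced above:

    dd if=/dev/urandom of=pattern bs=1M count=256   # the data to push through the bdev
    md5sum pattern > pattern.md5                    # checksum recorded before shutdown
    # ... write 'pattern' to the FTL bdev, shut it down, restore it, read it back into 'pattern' ...
    md5sum -c pattern.md5                           # prints 'pattern: OK' when the readback matches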
00:20:56.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:56.906 18:12:49 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:56.906 18:12:49 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:56.906 18:12:49 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:56.906 18:12:49 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:56.906 18:12:49 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.QZDoJF7ktB 00:20:56.907 18:12:49 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79204 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79204 00:20:56.907 18:12:49 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.907 18:12:49 ftl.ftl_restore -- common/autotest_common.sh@827 -- # '[' -z 79204 ']' 00:20:56.907 18:12:49 ftl.ftl_restore -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.907 18:12:49 ftl.ftl_restore -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:56.907 18:12:49 ftl.ftl_restore -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.907 18:12:49 ftl.ftl_restore -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:56.907 18:12:49 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:57.166 [2024-05-15 18:12:49.510825] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:20:57.166 [2024-05-15 18:12:49.510995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79204 ] 00:20:57.425 [2024-05-15 18:12:49.704691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.683 [2024-05-15 18:12:49.992293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.691 18:12:50 ftl.ftl_restore -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:58.691 18:12:50 ftl.ftl_restore -- common/autotest_common.sh@860 -- # return 0 00:20:58.691 18:12:50 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:58.691 18:12:50 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:58.691 18:12:50 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:58.691 18:12:50 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:58.691 18:12:50 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:58.691 18:12:50 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:58.691 18:12:51 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:58.950 18:12:51 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:58.950 18:12:51 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:58.950 18:12:51 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:20:58.950 18:12:51 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:58.950 18:12:51 ftl.ftl_restore -- 
common/autotest_common.sh@1376 -- # local bs 00:20:58.950 18:12:51 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:20:58.950 18:12:51 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:59.210 18:12:51 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:59.210 { 00:20:59.210 "name": "nvme0n1", 00:20:59.210 "aliases": [ 00:20:59.210 "008507be-ee83-492f-bf62-dd4070804602" 00:20:59.210 ], 00:20:59.210 "product_name": "NVMe disk", 00:20:59.210 "block_size": 4096, 00:20:59.210 "num_blocks": 1310720, 00:20:59.210 "uuid": "008507be-ee83-492f-bf62-dd4070804602", 00:20:59.210 "assigned_rate_limits": { 00:20:59.210 "rw_ios_per_sec": 0, 00:20:59.210 "rw_mbytes_per_sec": 0, 00:20:59.210 "r_mbytes_per_sec": 0, 00:20:59.210 "w_mbytes_per_sec": 0 00:20:59.210 }, 00:20:59.210 "claimed": true, 00:20:59.210 "claim_type": "read_many_write_one", 00:20:59.210 "zoned": false, 00:20:59.210 "supported_io_types": { 00:20:59.210 "read": true, 00:20:59.210 "write": true, 00:20:59.210 "unmap": true, 00:20:59.210 "write_zeroes": true, 00:20:59.210 "flush": true, 00:20:59.210 "reset": true, 00:20:59.210 "compare": true, 00:20:59.210 "compare_and_write": false, 00:20:59.210 "abort": true, 00:20:59.210 "nvme_admin": true, 00:20:59.210 "nvme_io": true 00:20:59.210 }, 00:20:59.210 "driver_specific": { 00:20:59.210 "nvme": [ 00:20:59.210 { 00:20:59.210 "pci_address": "0000:00:11.0", 00:20:59.210 "trid": { 00:20:59.210 "trtype": "PCIe", 00:20:59.210 "traddr": "0000:00:11.0" 00:20:59.210 }, 00:20:59.210 "ctrlr_data": { 00:20:59.210 "cntlid": 0, 00:20:59.210 "vendor_id": "0x1b36", 00:20:59.210 "model_number": "QEMU NVMe Ctrl", 00:20:59.210 "serial_number": "12341", 00:20:59.210 "firmware_revision": "8.0.0", 00:20:59.210 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:59.210 "oacs": { 00:20:59.210 "security": 0, 00:20:59.210 "format": 1, 00:20:59.210 "firmware": 0, 00:20:59.210 "ns_manage": 1 00:20:59.210 }, 00:20:59.210 "multi_ctrlr": false, 00:20:59.210 "ana_reporting": false 00:20:59.210 }, 00:20:59.210 "vs": { 00:20:59.210 "nvme_version": "1.4" 00:20:59.210 }, 00:20:59.210 "ns_data": { 00:20:59.210 "id": 1, 00:20:59.210 "can_share": false 00:20:59.210 } 00:20:59.210 } 00:20:59.210 ], 00:20:59.210 "mp_policy": "active_passive" 00:20:59.210 } 00:20:59.210 } 00:20:59.210 ]' 00:20:59.210 18:12:51 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:59.210 18:12:51 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:20:59.210 18:12:51 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:59.210 18:12:51 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=1310720 00:20:59.210 18:12:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:20:59.210 18:12:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 5120 00:20:59.210 18:12:51 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:59.210 18:12:51 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:59.210 18:12:51 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:59.210 18:12:51 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:59.210 18:12:51 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:59.470 18:12:51 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=608eade5-b99f-42d4-92d3-fafe41d7da1f 00:20:59.470 18:12:51 ftl.ftl_restore -- 
ftl/common.sh@29 -- # for lvs in $stores 00:20:59.470 18:12:51 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 608eade5-b99f-42d4-92d3-fafe41d7da1f 00:20:59.729 18:12:52 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:59.987 18:12:52 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=0c15199b-6829-48a9-bd36-5f1e3a9e6738 00:20:59.987 18:12:52 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0c15199b-6829-48a9-bd36-5f1e3a9e6738 00:21:00.247 18:12:52 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=75a5248c-8772-41ea-b648-3794ceade9ad 00:21:00.247 18:12:52 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:00.247 18:12:52 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 75a5248c-8772-41ea-b648-3794ceade9ad 00:21:00.247 18:12:52 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:00.247 18:12:52 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:00.247 18:12:52 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=75a5248c-8772-41ea-b648-3794ceade9ad 00:21:00.247 18:12:52 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:00.247 18:12:52 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 75a5248c-8772-41ea-b648-3794ceade9ad 00:21:00.247 18:12:52 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=75a5248c-8772-41ea-b648-3794ceade9ad 00:21:00.247 18:12:52 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:00.247 18:12:52 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:21:00.247 18:12:52 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:21:00.247 18:12:52 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 75a5248c-8772-41ea-b648-3794ceade9ad 00:21:00.505 18:12:52 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:00.505 { 00:21:00.505 "name": "75a5248c-8772-41ea-b648-3794ceade9ad", 00:21:00.505 "aliases": [ 00:21:00.505 "lvs/nvme0n1p0" 00:21:00.505 ], 00:21:00.506 "product_name": "Logical Volume", 00:21:00.506 "block_size": 4096, 00:21:00.506 "num_blocks": 26476544, 00:21:00.506 "uuid": "75a5248c-8772-41ea-b648-3794ceade9ad", 00:21:00.506 "assigned_rate_limits": { 00:21:00.506 "rw_ios_per_sec": 0, 00:21:00.506 "rw_mbytes_per_sec": 0, 00:21:00.506 "r_mbytes_per_sec": 0, 00:21:00.506 "w_mbytes_per_sec": 0 00:21:00.506 }, 00:21:00.506 "claimed": false, 00:21:00.506 "zoned": false, 00:21:00.506 "supported_io_types": { 00:21:00.506 "read": true, 00:21:00.506 "write": true, 00:21:00.506 "unmap": true, 00:21:00.506 "write_zeroes": true, 00:21:00.506 "flush": false, 00:21:00.506 "reset": true, 00:21:00.506 "compare": false, 00:21:00.506 "compare_and_write": false, 00:21:00.506 "abort": false, 00:21:00.506 "nvme_admin": false, 00:21:00.506 "nvme_io": false 00:21:00.506 }, 00:21:00.506 "driver_specific": { 00:21:00.506 "lvol": { 00:21:00.506 "lvol_store_uuid": "0c15199b-6829-48a9-bd36-5f1e3a9e6738", 00:21:00.506 "base_bdev": "nvme0n1", 00:21:00.506 "thin_provision": true, 00:21:00.506 "num_allocated_clusters": 0, 00:21:00.506 "snapshot": false, 00:21:00.506 "clone": false, 00:21:00.506 "esnap_clone": false 00:21:00.506 } 00:21:00.506 } 00:21:00.506 } 00:21:00.506 ]' 00:21:00.506 18:12:52 ftl.ftl_restore -- 
common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:00.763 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:21:00.763 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:00.763 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:21:00.763 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:00.763 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:21:00.763 18:12:53 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:00.763 18:12:53 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:00.763 18:12:53 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:01.021 18:12:53 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:01.021 18:12:53 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:01.021 18:12:53 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 75a5248c-8772-41ea-b648-3794ceade9ad 00:21:01.021 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=75a5248c-8772-41ea-b648-3794ceade9ad 00:21:01.021 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:01.021 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:21:01.021 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:21:01.021 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 75a5248c-8772-41ea-b648-3794ceade9ad 00:21:01.281 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:01.281 { 00:21:01.281 "name": "75a5248c-8772-41ea-b648-3794ceade9ad", 00:21:01.281 "aliases": [ 00:21:01.281 "lvs/nvme0n1p0" 00:21:01.281 ], 00:21:01.281 "product_name": "Logical Volume", 00:21:01.281 "block_size": 4096, 00:21:01.281 "num_blocks": 26476544, 00:21:01.281 "uuid": "75a5248c-8772-41ea-b648-3794ceade9ad", 00:21:01.281 "assigned_rate_limits": { 00:21:01.281 "rw_ios_per_sec": 0, 00:21:01.281 "rw_mbytes_per_sec": 0, 00:21:01.281 "r_mbytes_per_sec": 0, 00:21:01.281 "w_mbytes_per_sec": 0 00:21:01.281 }, 00:21:01.281 "claimed": false, 00:21:01.281 "zoned": false, 00:21:01.281 "supported_io_types": { 00:21:01.281 "read": true, 00:21:01.281 "write": true, 00:21:01.281 "unmap": true, 00:21:01.281 "write_zeroes": true, 00:21:01.281 "flush": false, 00:21:01.281 "reset": true, 00:21:01.281 "compare": false, 00:21:01.281 "compare_and_write": false, 00:21:01.281 "abort": false, 00:21:01.281 "nvme_admin": false, 00:21:01.281 "nvme_io": false 00:21:01.281 }, 00:21:01.281 "driver_specific": { 00:21:01.281 "lvol": { 00:21:01.281 "lvol_store_uuid": "0c15199b-6829-48a9-bd36-5f1e3a9e6738", 00:21:01.281 "base_bdev": "nvme0n1", 00:21:01.281 "thin_provision": true, 00:21:01.281 "num_allocated_clusters": 0, 00:21:01.281 "snapshot": false, 00:21:01.281 "clone": false, 00:21:01.281 "esnap_clone": false 00:21:01.281 } 00:21:01.281 } 00:21:01.281 } 00:21:01.281 ]' 00:21:01.281 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:01.281 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:21:01.281 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:01.539 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:21:01.539 18:12:53 ftl.ftl_restore 
-- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:01.539 18:12:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:21:01.539 18:12:53 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:01.539 18:12:53 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:01.798 18:12:54 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:01.798 18:12:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 75a5248c-8772-41ea-b648-3794ceade9ad 00:21:01.798 18:12:54 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=75a5248c-8772-41ea-b648-3794ceade9ad 00:21:01.798 18:12:54 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:01.798 18:12:54 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:21:01.798 18:12:54 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:21:01.798 18:12:54 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 75a5248c-8772-41ea-b648-3794ceade9ad 00:21:02.057 18:12:54 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:02.057 { 00:21:02.057 "name": "75a5248c-8772-41ea-b648-3794ceade9ad", 00:21:02.057 "aliases": [ 00:21:02.057 "lvs/nvme0n1p0" 00:21:02.057 ], 00:21:02.057 "product_name": "Logical Volume", 00:21:02.057 "block_size": 4096, 00:21:02.057 "num_blocks": 26476544, 00:21:02.057 "uuid": "75a5248c-8772-41ea-b648-3794ceade9ad", 00:21:02.057 "assigned_rate_limits": { 00:21:02.057 "rw_ios_per_sec": 0, 00:21:02.057 "rw_mbytes_per_sec": 0, 00:21:02.057 "r_mbytes_per_sec": 0, 00:21:02.057 "w_mbytes_per_sec": 0 00:21:02.057 }, 00:21:02.057 "claimed": false, 00:21:02.057 "zoned": false, 00:21:02.057 "supported_io_types": { 00:21:02.057 "read": true, 00:21:02.057 "write": true, 00:21:02.057 "unmap": true, 00:21:02.057 "write_zeroes": true, 00:21:02.057 "flush": false, 00:21:02.057 "reset": true, 00:21:02.057 "compare": false, 00:21:02.057 "compare_and_write": false, 00:21:02.057 "abort": false, 00:21:02.057 "nvme_admin": false, 00:21:02.057 "nvme_io": false 00:21:02.057 }, 00:21:02.057 "driver_specific": { 00:21:02.057 "lvol": { 00:21:02.057 "lvol_store_uuid": "0c15199b-6829-48a9-bd36-5f1e3a9e6738", 00:21:02.057 "base_bdev": "nvme0n1", 00:21:02.057 "thin_provision": true, 00:21:02.057 "num_allocated_clusters": 0, 00:21:02.057 "snapshot": false, 00:21:02.057 "clone": false, 00:21:02.057 "esnap_clone": false 00:21:02.057 } 00:21:02.057 } 00:21:02.057 } 00:21:02.057 ]' 00:21:02.057 18:12:54 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:02.057 18:12:54 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:21:02.057 18:12:54 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:02.057 18:12:54 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:21:02.057 18:12:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:02.057 18:12:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:21:02.057 18:12:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:02.057 18:12:54 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 75a5248c-8772-41ea-b648-3794ceade9ad --l2p_dram_limit 10' 00:21:02.057 18:12:54 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:02.057 18:12:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 
0000:00:10.0 ']' 00:21:02.057 18:12:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:02.057 18:12:54 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:02.057 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:02.057 18:12:54 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 75a5248c-8772-41ea-b648-3794ceade9ad --l2p_dram_limit 10 -c nvc0n1p0 00:21:02.326 [2024-05-15 18:12:54.633711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.326 [2024-05-15 18:12:54.633780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:02.326 [2024-05-15 18:12:54.633823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:02.326 [2024-05-15 18:12:54.633836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.326 [2024-05-15 18:12:54.633913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.327 [2024-05-15 18:12:54.633931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:02.327 [2024-05-15 18:12:54.633950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:02.327 [2024-05-15 18:12:54.633962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.327 [2024-05-15 18:12:54.633992] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:02.327 [2024-05-15 18:12:54.635046] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:02.327 [2024-05-15 18:12:54.635096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.327 [2024-05-15 18:12:54.635113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:02.327 [2024-05-15 18:12:54.635132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:21:02.327 [2024-05-15 18:12:54.635143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.327 [2024-05-15 18:12:54.635370] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7bf88be1-4a70-4e52-92b3-ca484d9799c3 00:21:02.327 [2024-05-15 18:12:54.637252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.327 [2024-05-15 18:12:54.637324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:02.327 [2024-05-15 18:12:54.637346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:02.327 [2024-05-15 18:12:54.637361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.327 [2024-05-15 18:12:54.647237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.327 [2024-05-15 18:12:54.647346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:02.327 [2024-05-15 18:12:54.647367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.817 ms 00:21:02.327 [2024-05-15 18:12:54.647381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.327 [2024-05-15 18:12:54.647547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.327 [2024-05-15 18:12:54.647572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:02.327 [2024-05-15 18:12:54.647586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.099 ms 00:21:02.327 [2024-05-15 18:12:54.647604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.327 [2024-05-15 18:12:54.647699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.327 [2024-05-15 18:12:54.647725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:02.327 [2024-05-15 18:12:54.647753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:02.327 [2024-05-15 18:12:54.647766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.327 [2024-05-15 18:12:54.647816] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:02.327 [2024-05-15 18:12:54.653204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.327 [2024-05-15 18:12:54.653245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:02.327 [2024-05-15 18:12:54.653282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.392 ms 00:21:02.327 [2024-05-15 18:12:54.653293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.327 [2024-05-15 18:12:54.653385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.327 [2024-05-15 18:12:54.653403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:02.327 [2024-05-15 18:12:54.653419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:02.327 [2024-05-15 18:12:54.653431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.327 [2024-05-15 18:12:54.653489] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:02.327 [2024-05-15 18:12:54.653642] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:02.327 [2024-05-15 18:12:54.653664] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:02.327 [2024-05-15 18:12:54.653681] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:02.327 [2024-05-15 18:12:54.653717] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:02.327 [2024-05-15 18:12:54.653731] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:02.327 [2024-05-15 18:12:54.653746] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:02.327 [2024-05-15 18:12:54.653758] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:02.327 [2024-05-15 18:12:54.653771] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:02.327 [2024-05-15 18:12:54.653782] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:02.327 [2024-05-15 18:12:54.653796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.327 [2024-05-15 18:12:54.653808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:02.327 [2024-05-15 18:12:54.653828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:21:02.327 [2024-05-15 18:12:54.653840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.327 [2024-05-15 18:12:54.653922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
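The get_bdev_size calls traced above reduce to a small amount of shell: query the bdev over RPC, pull block_size and num_blocks out of the JSON with jq, and convert to MiB. A minimal sketch using only the rpc.py call, jq filters, and values visible in this log (the real helper lives in common/autotest_common.sh and may differ in detail):

  # Query bdev geometry over RPC and convert the size to MiB.
  bdev_info=$(./scripts/rpc.py bdev_get_bdevs -b 75a5248c-8772-41ea-b648-3794ceade9ad)
  bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 bytes per block
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 26476544 blocks
  echo $((bs * nb / 1024 / 1024))               # 4096 * 26476544 / 2^20 = 103424 MiB

The 5171 MiB cache size chosen earlier by ftl/common.sh is consistent with a roughly 5% carve-out of the 103424 MiB base device (103424 / 20 = 5171.2), which the trace then hands to bdev_split_create.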
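The "[: : integer expression expected" message above comes from bash, not from FTL: restore.sh line 54 compares a variable that is empty in this run against 1 with -eq, which requires integers on both sides. A minimal reproduction of the behavior seen in the trace (the surrounding if is an assumption, since the script body is not shown in the log):

  # '[' '' -eq 1 ']': an empty string is not an integer, so the test builtin
  # prints the error on stderr and returns status 2; used as a condition,
  # that simply means "false", and the run proceeds to bdev_ftl_create.
  v=''
  if [ "$v" -eq 1 ]; then echo "branch taken"; fi   # error printed, branch skipped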
00:21:02.327 [2024-05-15 18:12:54.653965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:02.327 [2024-05-15 18:12:54.653981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:02.327 [2024-05-15 18:12:54.653992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.327 [2024-05-15 18:12:54.654076] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:02.327 [2024-05-15 18:12:54.654092] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:02.327 [2024-05-15 18:12:54.654109] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.327 [2024-05-15 18:12:54.654124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.327 [2024-05-15 18:12:54.654161] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:02.327 [2024-05-15 18:12:54.654172] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:02.327 [2024-05-15 18:12:54.654185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:02.327 [2024-05-15 18:12:54.654195] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:02.327 [2024-05-15 18:12:54.654208] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:02.327 [2024-05-15 18:12:54.654219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.327 [2024-05-15 18:12:54.654231] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:02.327 [2024-05-15 18:12:54.654243] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:02.327 [2024-05-15 18:12:54.654255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.327 [2024-05-15 18:12:54.654266] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:02.327 [2024-05-15 18:12:54.654281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:02.327 [2024-05-15 18:12:54.654291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.327 [2024-05-15 18:12:54.654303] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:02.327 [2024-05-15 18:12:54.654314] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:02.327 [2024-05-15 18:12:54.654328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.327 [2024-05-15 18:12:54.654340] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:02.327 [2024-05-15 18:12:54.654627] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:02.327 [2024-05-15 18:12:54.654696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:02.327 [2024-05-15 18:12:54.654744] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:02.327 [2024-05-15 18:12:54.654785] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:02.327 [2024-05-15 18:12:54.654826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:02.327 [2024-05-15 18:12:54.654944] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:02.327 [2024-05-15 18:12:54.655009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:02.327 [2024-05-15 18:12:54.655050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:02.327 [2024-05-15 18:12:54.655090] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l2 00:21:02.327 [2024-05-15 18:12:54.655234] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:02.327 [2024-05-15 18:12:54.655277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:02.327 [2024-05-15 18:12:54.655335] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:02.327 [2024-05-15 18:12:54.655466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:02.327 [2024-05-15 18:12:54.655516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:02.327 [2024-05-15 18:12:54.655562] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:02.327 [2024-05-15 18:12:54.655601] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:02.327 [2024-05-15 18:12:54.655733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.327 [2024-05-15 18:12:54.655783] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:02.327 [2024-05-15 18:12:54.655826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:02.327 [2024-05-15 18:12:54.655955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.327 [2024-05-15 18:12:54.656078] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:02.327 [2024-05-15 18:12:54.656103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:02.327 [2024-05-15 18:12:54.656120] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.327 [2024-05-15 18:12:54.656132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.327 [2024-05-15 18:12:54.656147] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:02.327 [2024-05-15 18:12:54.656159] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:02.327 [2024-05-15 18:12:54.656172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:02.327 [2024-05-15 18:12:54.656183] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:02.327 [2024-05-15 18:12:54.656196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:02.327 [2024-05-15 18:12:54.656207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:02.327 [2024-05-15 18:12:54.656227] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:02.327 [2024-05-15 18:12:54.656256] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.327 [2024-05-15 18:12:54.656274] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:02.327 [2024-05-15 18:12:54.656287] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:02.328 [2024-05-15 18:12:54.656317] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:02.328 [2024-05-15 18:12:54.656331] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:02.328 [2024-05-15 18:12:54.656345] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 
blk_offs:0x5520 blk_sz:0x400 00:21:02.328 [2024-05-15 18:12:54.656357] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:02.328 [2024-05-15 18:12:54.656370] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:02.328 [2024-05-15 18:12:54.656382] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:02.328 [2024-05-15 18:12:54.656396] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:02.328 [2024-05-15 18:12:54.656408] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:02.328 [2024-05-15 18:12:54.656422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:02.328 [2024-05-15 18:12:54.656434] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:02.328 [2024-05-15 18:12:54.656449] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:02.328 [2024-05-15 18:12:54.656460] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:02.328 [2024-05-15 18:12:54.656481] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.328 [2024-05-15 18:12:54.656494] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:02.328 [2024-05-15 18:12:54.656509] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:02.328 [2024-05-15 18:12:54.656521] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:02.328 [2024-05-15 18:12:54.656536] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:02.328 [2024-05-15 18:12:54.656551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.328 [2024-05-15 18:12:54.656566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:02.328 [2024-05-15 18:12:54.656582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.520 ms 00:21:02.328 [2024-05-15 18:12:54.656596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.328 [2024-05-15 18:12:54.679146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.328 [2024-05-15 18:12:54.679239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.328 [2024-05-15 18:12:54.679261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.479 ms 00:21:02.328 [2024-05-15 18:12:54.679275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.328 [2024-05-15 18:12:54.679421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.328 [2024-05-15 18:12:54.679445] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:02.328 [2024-05-15 18:12:54.679459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:02.328 [2024-05-15 18:12:54.679475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.328 [2024-05-15 18:12:54.721814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.328 [2024-05-15 18:12:54.721892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.328 [2024-05-15 18:12:54.721914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.254 ms 00:21:02.328 [2024-05-15 18:12:54.721929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.328 [2024-05-15 18:12:54.721995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.328 [2024-05-15 18:12:54.722013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.328 [2024-05-15 18:12:54.722026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:02.328 [2024-05-15 18:12:54.722040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.328 [2024-05-15 18:12:54.722751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.328 [2024-05-15 18:12:54.722796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.328 [2024-05-15 18:12:54.722827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:21:02.328 [2024-05-15 18:12:54.722844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.328 [2024-05-15 18:12:54.723001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.328 [2024-05-15 18:12:54.723023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.328 [2024-05-15 18:12:54.723036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:21:02.328 [2024-05-15 18:12:54.723053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.328 [2024-05-15 18:12:54.745049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.328 [2024-05-15 18:12:54.745115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.328 [2024-05-15 18:12:54.745136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.968 ms 00:21:02.328 [2024-05-15 18:12:54.745151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.328 [2024-05-15 18:12:54.759491] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:02.328 [2024-05-15 18:12:54.763628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.328 [2024-05-15 18:12:54.763669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:02.328 [2024-05-15 18:12:54.763711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.330 ms 00:21:02.328 [2024-05-15 18:12:54.763724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.588 [2024-05-15 18:12:54.842503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.588 [2024-05-15 18:12:54.842576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:02.588 [2024-05-15 18:12:54.842617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.712 ms 00:21:02.588 [2024-05-15 18:12:54.842629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:21:02.588 [2024-05-15 18:12:54.842691] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:21:02.588 [2024-05-15 18:12:54.842711] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:21:05.928 [2024-05-15 18:12:57.634425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.928 [2024-05-15 18:12:57.634503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:05.928 [2024-05-15 18:12:57.634546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2791.717 ms 00:21:05.928 [2024-05-15 18:12:57.634559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.928 [2024-05-15 18:12:57.634813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.928 [2024-05-15 18:12:57.634839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:05.928 [2024-05-15 18:12:57.634861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:21:05.928 [2024-05-15 18:12:57.634883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.928 [2024-05-15 18:12:57.668499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.928 [2024-05-15 18:12:57.668560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:05.928 [2024-05-15 18:12:57.668585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.529 ms 00:21:05.928 [2024-05-15 18:12:57.668598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.928 [2024-05-15 18:12:57.702488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.928 [2024-05-15 18:12:57.702544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:05.928 [2024-05-15 18:12:57.702573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.831 ms 00:21:05.928 [2024-05-15 18:12:57.702589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.928 [2024-05-15 18:12:57.703109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.928 [2024-05-15 18:12:57.703160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:05.928 [2024-05-15 18:12:57.703187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:21:05.928 [2024-05-15 18:12:57.703203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.928 [2024-05-15 18:12:57.778400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.928 [2024-05-15 18:12:57.778478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:05.928 [2024-05-15 18:12:57.778540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.112 ms 00:21:05.928 [2024-05-15 18:12:57.778569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.928 [2024-05-15 18:12:57.808874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.928 [2024-05-15 18:12:57.808951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:05.928 [2024-05-15 18:12:57.808992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.236 ms 00:21:05.928 [2024-05-15 18:12:57.809004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.928 [2024-05-15 18:12:57.811688] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.928 [2024-05-15 18:12:57.811933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:05.928 [2024-05-15 18:12:57.812059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.572 ms 00:21:05.928 [2024-05-15 18:12:57.812084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.928 [2024-05-15 18:12:57.843272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.928 [2024-05-15 18:12:57.843492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:05.928 [2024-05-15 18:12:57.843662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.065 ms 00:21:05.928 [2024-05-15 18:12:57.843715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.928 [2024-05-15 18:12:57.843943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.928 [2024-05-15 18:12:57.844003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:05.928 [2024-05-15 18:12:57.844170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:05.928 [2024-05-15 18:12:57.844224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.928 [2024-05-15 18:12:57.844434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.928 [2024-05-15 18:12:57.844568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:05.928 [2024-05-15 18:12:57.844682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:05.928 [2024-05-15 18:12:57.844732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.928 [2024-05-15 18:12:57.846126] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3211.852 ms, result 0 00:21:05.928 { 00:21:05.928 "name": "ftl0", 00:21:05.928 "uuid": "7bf88be1-4a70-4e52-92b3-ca484d9799c3" 00:21:05.928 } 00:21:05.928 18:12:57 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:05.928 18:12:57 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:05.928 18:12:58 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:05.928 18:12:58 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:06.203 [2024-05-15 18:12:58.412839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.203 [2024-05-15 18:12:58.412908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:06.203 [2024-05-15 18:12:58.412931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:06.203 [2024-05-15 18:12:58.412949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.203 [2024-05-15 18:12:58.412990] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:06.203 [2024-05-15 18:12:58.416637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.203 [2024-05-15 18:12:58.416672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:06.203 [2024-05-15 18:12:58.416709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.618 ms 00:21:06.203 [2024-05-15 18:12:58.416721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
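Restore.sh lines 61 to 65, traced above, snapshot the live bdev configuration as JSON before tearing ftl0 down; that file is what spdk_dd replays later in this log. A sketch of the sequence, with the output redirect assumed (the path matches the --json argument used further down):

  # Wrap the live bdev subsystem config in a top-level subsystems array,
  # save it, then unload the FTL bdev so it can be restored from that file.
  {
      echo '{"subsystems": ['
      ./scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
  } > test/ftl/config/ftl.json   # assumed redirect target
  ./scripts/rpc.py bdev_ftl_unload -b ftl0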
00:21:06.203 [2024-05-15 18:12:58.417055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.203 [2024-05-15 18:12:58.417075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:06.203 [2024-05-15 18:12:58.417090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:21:06.203 [2024-05-15 18:12:58.417102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.203 [2024-05-15 18:12:58.420358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.203 [2024-05-15 18:12:58.420406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:06.203 [2024-05-15 18:12:58.420440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:21:06.203 [2024-05-15 18:12:58.420452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.203 [2024-05-15 18:12:58.426684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.203 [2024-05-15 18:12:58.426720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:06.203 [2024-05-15 18:12:58.426758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.188 ms 00:21:06.203 [2024-05-15 18:12:58.426770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.203 [2024-05-15 18:12:58.457283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.203 [2024-05-15 18:12:58.457340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:06.203 [2024-05-15 18:12:58.457379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.407 ms 00:21:06.203 [2024-05-15 18:12:58.457390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.203 [2024-05-15 18:12:58.474998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.203 [2024-05-15 18:12:58.475038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:06.203 [2024-05-15 18:12:58.475076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.555 ms 00:21:06.203 [2024-05-15 18:12:58.475091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.203 [2024-05-15 18:12:58.475270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.203 [2024-05-15 18:12:58.475290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:06.203 [2024-05-15 18:12:58.475350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:21:06.203 [2024-05-15 18:12:58.475363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.203 [2024-05-15 18:12:58.503672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.203 [2024-05-15 18:12:58.503719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:06.203 [2024-05-15 18:12:58.503758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.277 ms 00:21:06.203 [2024-05-15 18:12:58.503770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.203 [2024-05-15 18:12:58.532742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.203 [2024-05-15 18:12:58.532800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:06.203 [2024-05-15 18:12:58.532839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.918 ms 00:21:06.203 [2024-05-15 18:12:58.532851] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.203 [2024-05-15 18:12:58.560853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.203 [2024-05-15 18:12:58.560897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:06.203 [2024-05-15 18:12:58.560935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.948 ms 00:21:06.203 [2024-05-15 18:12:58.560947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.203 [2024-05-15 18:12:58.588936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.203 [2024-05-15 18:12:58.588976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:06.203 [2024-05-15 18:12:58.589026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.873 ms 00:21:06.203 [2024-05-15 18:12:58.589037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.203 [2024-05-15 18:12:58.589085] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:06.203 [2024-05-15 18:12:58.589107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:06.203 [2024-05-15 18:12:58.589126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:06.203 [2024-05-15 18:12:58.589137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:06.203 [2024-05-15 18:12:58.589151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:06.203 [2024-05-15 18:12:58.589162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:06.203 [2024-05-15 18:12:58.589175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:06.203 [2024-05-15 18:12:58.589187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:06.203 [2024-05-15 18:12:58.589200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:06.203 [2024-05-15 18:12:58.589211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:06.203 [2024-05-15 18:12:58.589224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589359] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 
[2024-05-15 18:12:58.589734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.589985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:21:06.204 [2024-05-15 18:12:58.590084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:06.204 [2024-05-15 18:12:58.590585] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:06.204 [2024-05-15 18:12:58.590600] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7bf88be1-4a70-4e52-92b3-ca484d9799c3 00:21:06.204 [2024-05-15 18:12:58.590612] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:06.205 [2024-05-15 18:12:58.590626] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:06.205 [2024-05-15 18:12:58.590637] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:06.205 [2024-05-15 18:12:58.590655] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:06.205 [2024-05-15 18:12:58.590666] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:06.205 [2024-05-15 18:12:58.590681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:06.205 [2024-05-15 18:12:58.590692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:06.205 [2024-05-15 18:12:58.590705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:06.205 [2024-05-15 18:12:58.590715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:06.205 [2024-05-15 18:12:58.590729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.205 [2024-05-15 18:12:58.590741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:06.205 [2024-05-15 18:12:58.590758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms 00:21:06.205 [2024-05-15 18:12:58.590770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.205 [2024-05-15 18:12:58.607440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.205 [2024-05-15 18:12:58.607506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:06.205 [2024-05-15 18:12:58.607544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.598 ms 00:21:06.205 [2024-05-15 18:12:58.607572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.205 [2024-05-15 18:12:58.607886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.205 [2024-05-15 18:12:58.607904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:21:06.205 [2024-05-15 18:12:58.607920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:21:06.205 [2024-05-15 18:12:58.607932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.205 [2024-05-15 18:12:58.663134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.205 [2024-05-15 18:12:58.663203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:06.205 [2024-05-15 18:12:58.663243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.205 [2024-05-15 18:12:58.663256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.205 [2024-05-15 18:12:58.663383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.205 [2024-05-15 18:12:58.663402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:06.205 [2024-05-15 18:12:58.663421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.205 [2024-05-15 18:12:58.663433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.205 [2024-05-15 18:12:58.663569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.205 [2024-05-15 18:12:58.663590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:06.205 [2024-05-15 18:12:58.663609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.205 [2024-05-15 18:12:58.663621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.205 [2024-05-15 18:12:58.663651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.205 [2024-05-15 18:12:58.663665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:06.205 [2024-05-15 18:12:58.663680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.205 [2024-05-15 18:12:58.663692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.464 [2024-05-15 18:12:58.766559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.464 [2024-05-15 18:12:58.766629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:06.464 [2024-05-15 18:12:58.766667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.464 [2024-05-15 18:12:58.766679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.464 [2024-05-15 18:12:58.801730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.464 [2024-05-15 18:12:58.801781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:06.464 [2024-05-15 18:12:58.801820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.464 [2024-05-15 18:12:58.801832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.464 [2024-05-15 18:12:58.801936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.464 [2024-05-15 18:12:58.801954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:06.464 [2024-05-15 18:12:58.801968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.464 [2024-05-15 18:12:58.801982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.464 [2024-05-15 18:12:58.802045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.464 [2024-05-15 
18:12:58.802061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:06.464 [2024-05-15 18:12:58.802074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.464 [2024-05-15 18:12:58.802085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.464 [2024-05-15 18:12:58.802203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.465 [2024-05-15 18:12:58.802220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:06.465 [2024-05-15 18:12:58.802234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.465 [2024-05-15 18:12:58.802248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.465 [2024-05-15 18:12:58.802299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.465 [2024-05-15 18:12:58.802355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:06.465 [2024-05-15 18:12:58.802375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.465 [2024-05-15 18:12:58.802386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.465 [2024-05-15 18:12:58.802439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.465 [2024-05-15 18:12:58.802453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:06.465 [2024-05-15 18:12:58.802466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.465 [2024-05-15 18:12:58.802477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.465 [2024-05-15 18:12:58.802568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.465 [2024-05-15 18:12:58.802601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:06.465 [2024-05-15 18:12:58.802615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.465 [2024-05-15 18:12:58.802627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.465 [2024-05-15 18:12:58.802810] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 389.921 ms, result 0 00:21:06.465 true 00:21:06.465 18:12:58 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79204 00:21:06.465 18:12:58 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 79204 ']' 00:21:06.465 18:12:58 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 79204 00:21:06.465 18:12:58 ftl.ftl_restore -- common/autotest_common.sh@951 -- # uname 00:21:06.465 18:12:58 ftl.ftl_restore -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:06.465 18:12:58 ftl.ftl_restore -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79204 00:21:06.465 killing process with pid 79204 00:21:06.465 18:12:58 ftl.ftl_restore -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:06.465 18:12:58 ftl.ftl_restore -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:06.465 18:12:58 ftl.ftl_restore -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79204' 00:21:06.465 18:12:58 ftl.ftl_restore -- common/autotest_common.sh@965 -- # kill 79204 00:21:06.465 18:12:58 ftl.ftl_restore -- common/autotest_common.sh@970 -- # wait 79204 00:21:11.733 18:13:03 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 
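The dd command above starts the actual restore check: generate 1 GiB of random data, checksum it, then stream it onto the FTL bdev with spdk_dd using the configuration saved before shutdown. Sketched from the restore.sh commands visible in this log; the checksum taken here implies a later comparison against data read back after restore:

  # 4 KiB * 262144 blocks = 1 GiB of random test data.
  dd if=/dev/urandom of=test/ftl/testfile bs=4K count=256K
  md5sum test/ftl/testfile   # reference checksum
  # Replay the saved bdev config and write the file through ftl0.
  ./build/bin/spdk_dd --if=test/ftl/testfile --ob=ftl0 \
      --json=test/ftl/config/ftl.json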
00:21:15.926 262144+0 records in 00:21:15.926 262144+0 records out 00:21:15.926 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.52946 s, 237 MB/s 00:21:15.926 18:13:08 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:18.459 18:13:10 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:18.459 [2024-05-15 18:13:10.430018] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:21:18.459 [2024-05-15 18:13:10.430171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79451 ] 00:21:18.459 [2024-05-15 18:13:10.598689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.459 [2024-05-15 18:13:10.863836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.721 [2024-05-15 18:13:11.213882] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:18.721 [2024-05-15 18:13:11.213988] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:18.981 [2024-05-15 18:13:11.370584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.981 [2024-05-15 18:13:11.370658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:18.981 [2024-05-15 18:13:11.370694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:18.981 [2024-05-15 18:13:11.370712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.981 [2024-05-15 18:13:11.370780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.981 [2024-05-15 18:13:11.370801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:18.981 [2024-05-15 18:13:11.370814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:18.981 [2024-05-15 18:13:11.370831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.981 [2024-05-15 18:13:11.370861] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:18.981 [2024-05-15 18:13:11.371764] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:18.981 [2024-05-15 18:13:11.371804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.981 [2024-05-15 18:13:11.371819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:18.981 [2024-05-15 18:13:11.371832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:21:18.981 [2024-05-15 18:13:11.371844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.981 [2024-05-15 18:13:11.373867] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:18.981 [2024-05-15 18:13:11.390633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.981 [2024-05-15 18:13:11.390693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:18.981 [2024-05-15 18:13:11.390743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.767 ms 00:21:18.981 [2024-05-15 18:13:11.390755] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:18.981 [2024-05-15 18:13:11.390851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.981 [2024-05-15 18:13:11.390871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:18.981 [2024-05-15 18:13:11.390883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:18.981 [2024-05-15 18:13:11.390894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.981 [2024-05-15 18:13:11.400341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.981 [2024-05-15 18:13:11.400569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:18.981 [2024-05-15 18:13:11.400685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.344 ms 00:21:18.981 [2024-05-15 18:13:11.400735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.981 [2024-05-15 18:13:11.400871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.981 [2024-05-15 18:13:11.400935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:18.981 [2024-05-15 18:13:11.400997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:18.981 [2024-05-15 18:13:11.401036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.981 [2024-05-15 18:13:11.401136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.981 [2024-05-15 18:13:11.401259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:18.981 [2024-05-15 18:13:11.401338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:18.981 [2024-05-15 18:13:11.401381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.981 [2024-05-15 18:13:11.401472] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:18.981 [2024-05-15 18:13:11.406854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.981 [2024-05-15 18:13:11.407041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:18.981 [2024-05-15 18:13:11.407176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.392 ms 00:21:18.981 [2024-05-15 18:13:11.407239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.981 [2024-05-15 18:13:11.407325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.981 [2024-05-15 18:13:11.407411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:18.981 [2024-05-15 18:13:11.407462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:18.981 [2024-05-15 18:13:11.407498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.981 [2024-05-15 18:13:11.407595] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:18.981 [2024-05-15 18:13:11.407660] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:18.981 [2024-05-15 18:13:11.407813] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:18.981 [2024-05-15 18:13:11.407898] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:18.981 [2024-05-15 18:13:11.408026] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:18.981 [2024-05-15 18:13:11.408100] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:18.981 [2024-05-15 18:13:11.408158] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:18.981 [2024-05-15 18:13:11.408290] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:18.981 [2024-05-15 18:13:11.408480] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:18.981 [2024-05-15 18:13:11.408540] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:18.981 [2024-05-15 18:13:11.408592] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:18.981 [2024-05-15 18:13:11.408635] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:18.981 [2024-05-15 18:13:11.408728] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:18.981 [2024-05-15 18:13:11.408776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.981 [2024-05-15 18:13:11.408816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:18.981 [2024-05-15 18:13:11.408855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:21:18.981 [2024-05-15 18:13:11.408891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.981 [2024-05-15 18:13:11.409001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.981 [2024-05-15 18:13:11.409159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:18.981 [2024-05-15 18:13:11.409208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:18.981 [2024-05-15 18:13:11.409247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.981 [2024-05-15 18:13:11.409380] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:18.981 [2024-05-15 18:13:11.409433] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:18.981 [2024-05-15 18:13:11.409482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:18.981 [2024-05-15 18:13:11.409577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.981 [2024-05-15 18:13:11.409687] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:18.981 [2024-05-15 18:13:11.409741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:18.981 [2024-05-15 18:13:11.409830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:18.981 [2024-05-15 18:13:11.409851] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:18.981 [2024-05-15 18:13:11.409863] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:18.981 [2024-05-15 18:13:11.409874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:18.981 [2024-05-15 18:13:11.409886] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:18.981 [2024-05-15 18:13:11.409896] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:18.981 [2024-05-15 18:13:11.409907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:18.981 [2024-05-15 
18:13:11.409917] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:18.981 [2024-05-15 18:13:11.409928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:18.981 [2024-05-15 18:13:11.409955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.981 [2024-05-15 18:13:11.409967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:18.981 [2024-05-15 18:13:11.409978] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:18.981 [2024-05-15 18:13:11.409989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.982 [2024-05-15 18:13:11.410000] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:18.982 [2024-05-15 18:13:11.410012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:18.982 [2024-05-15 18:13:11.410023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:18.982 [2024-05-15 18:13:11.410034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:18.982 [2024-05-15 18:13:11.410045] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:18.982 [2024-05-15 18:13:11.410055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:18.982 [2024-05-15 18:13:11.410066] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:18.982 [2024-05-15 18:13:11.410077] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:18.982 [2024-05-15 18:13:11.410088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:18.982 [2024-05-15 18:13:11.410098] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:18.982 [2024-05-15 18:13:11.410109] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:18.982 [2024-05-15 18:13:11.410120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:18.982 [2024-05-15 18:13:11.410134] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:18.982 [2024-05-15 18:13:11.410146] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:18.982 [2024-05-15 18:13:11.410158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:18.982 [2024-05-15 18:13:11.410169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:18.982 [2024-05-15 18:13:11.410180] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:18.982 [2024-05-15 18:13:11.410191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:18.982 [2024-05-15 18:13:11.410202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:18.982 [2024-05-15 18:13:11.410213] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:18.982 [2024-05-15 18:13:11.410223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:18.982 [2024-05-15 18:13:11.410234] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:18.982 [2024-05-15 18:13:11.410247] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:18.982 [2024-05-15 18:13:11.410264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:18.982 [2024-05-15 18:13:11.410277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.982 [2024-05-15 18:13:11.410289] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
vmap 00:21:18.982 [2024-05-15 18:13:11.410325] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:18.982 [2024-05-15 18:13:11.410343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:18.982 [2024-05-15 18:13:11.410355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:18.982 [2024-05-15 18:13:11.410365] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:18.982 [2024-05-15 18:13:11.410377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:18.982 [2024-05-15 18:13:11.410390] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:18.982 [2024-05-15 18:13:11.410405] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:18.982 [2024-05-15 18:13:11.410418] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:18.982 [2024-05-15 18:13:11.410431] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:18.982 [2024-05-15 18:13:11.410443] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:18.982 [2024-05-15 18:13:11.410455] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:18.982 [2024-05-15 18:13:11.410467] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:18.982 [2024-05-15 18:13:11.410480] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:18.982 [2024-05-15 18:13:11.410493] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:18.982 [2024-05-15 18:13:11.410504] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:18.982 [2024-05-15 18:13:11.410517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:18.982 [2024-05-15 18:13:11.410529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:18.982 [2024-05-15 18:13:11.410542] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:18.982 [2024-05-15 18:13:11.410555] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:18.982 [2024-05-15 18:13:11.410568] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:18.982 [2024-05-15 18:13:11.410580] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:18.982 [2024-05-15 18:13:11.410594] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:18.982 [2024-05-15 18:13:11.410607] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:18.982 [2024-05-15 18:13:11.410620] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:18.982 [2024-05-15 18:13:11.410632] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:18.982 [2024-05-15 18:13:11.410646] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:18.982 [2024-05-15 18:13:11.410661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.982 [2024-05-15 18:13:11.410674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:18.982 [2024-05-15 18:13:11.410687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.325 ms 00:21:18.982 [2024-05-15 18:13:11.410698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.982 [2024-05-15 18:13:11.433265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.982 [2024-05-15 18:13:11.433346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:18.982 [2024-05-15 18:13:11.433383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.498 ms 00:21:18.982 [2024-05-15 18:13:11.433396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.982 [2024-05-15 18:13:11.433526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.982 [2024-05-15 18:13:11.433543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:18.982 [2024-05-15 18:13:11.433556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:18.982 [2024-05-15 18:13:11.433568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.488896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.488967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:19.240 [2024-05-15 18:13:11.488994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.245 ms 00:21:19.240 [2024-05-15 18:13:11.489007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.489078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.489096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:19.240 [2024-05-15 18:13:11.489109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:19.240 [2024-05-15 18:13:11.489121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.489771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.489797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:19.240 [2024-05-15 18:13:11.489812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:21:19.240 [2024-05-15 18:13:11.489830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.489985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.490005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:21:19.240 [2024-05-15 18:13:11.490018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:21:19.240 [2024-05-15 18:13:11.490029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.510711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.510781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:19.240 [2024-05-15 18:13:11.510804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.651 ms 00:21:19.240 [2024-05-15 18:13:11.510821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.527793] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:19.240 [2024-05-15 18:13:11.527836] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:19.240 [2024-05-15 18:13:11.527894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.527908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:19.240 [2024-05-15 18:13:11.527922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.886 ms 00:21:19.240 [2024-05-15 18:13:11.527934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.557425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.557470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:19.240 [2024-05-15 18:13:11.557489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.443 ms 00:21:19.240 [2024-05-15 18:13:11.557501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.574323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.574365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:19.240 [2024-05-15 18:13:11.574390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.789 ms 00:21:19.240 [2024-05-15 18:13:11.574407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.590141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.590190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:19.240 [2024-05-15 18:13:11.590223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.679 ms 00:21:19.240 [2024-05-15 18:13:11.590234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.590756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.590786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:19.240 [2024-05-15 18:13:11.590800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:21:19.240 [2024-05-15 18:13:11.590813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.674553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.674662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:19.240 
[2024-05-15 18:13:11.674684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.707 ms 00:21:19.240 [2024-05-15 18:13:11.674697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.688244] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:19.240 [2024-05-15 18:13:11.692485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.692519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:19.240 [2024-05-15 18:13:11.692560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.708 ms 00:21:19.240 [2024-05-15 18:13:11.692604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.692727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.692747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:19.240 [2024-05-15 18:13:11.692761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:19.240 [2024-05-15 18:13:11.692773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.692876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.692894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:19.240 [2024-05-15 18:13:11.692907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:19.240 [2024-05-15 18:13:11.692920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.695132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.695182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:19.240 [2024-05-15 18:13:11.695208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.182 ms 00:21:19.240 [2024-05-15 18:13:11.695222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.695261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.695277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:19.240 [2024-05-15 18:13:11.695290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:19.240 [2024-05-15 18:13:11.695322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.695369] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:19.240 [2024-05-15 18:13:11.695387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.695399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:19.240 [2024-05-15 18:13:11.695416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:19.240 [2024-05-15 18:13:11.695437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.727535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.727583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:19.240 [2024-05-15 18:13:11.727603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.067 ms 00:21:19.240 [2024-05-15 
18:13:11.727615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.727700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.240 [2024-05-15 18:13:11.727726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:19.240 [2024-05-15 18:13:11.727740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:19.240 [2024-05-15 18:13:11.727751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.240 [2024-05-15 18:13:11.729159] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 358.012 ms, result 0 00:21:59.505  Copying: 1024/1024 [MB] (average 25 MBps)[2024-05-15 18:13:51.781945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.505 [2024-05-15 18:13:51.782017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:59.505 [2024-05-15 18:13:51.782041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:59.505 [2024-05-15 18:13:51.782054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.505 [2024-05-15 18:13:51.782085] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:59.505 [2024-05-15 18:13:51.785783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.505 [2024-05-15 18:13:51.785832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:59.505 [2024-05-15 18:13:51.785863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.675 ms 00:21:59.505 [2024-05-15 18:13:51.785875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.505 [2024-05-15 18:13:51.787607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.505 [2024-05-15 18:13:51.787651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:59.505 [2024-05-15 18:13:51.787668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 00:21:59.505 [2024-05-15 18:13:51.787681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:21:59.505 [2024-05-15 18:13:51.804100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.505 [2024-05-15 18:13:51.804150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:59.505 [2024-05-15 18:13:51.804169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.395 ms 00:21:59.505 [2024-05-15 18:13:51.804181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.505 [2024-05-15 18:13:51.810788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.505 [2024-05-15 18:13:51.810832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:59.505 [2024-05-15 18:13:51.810847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.564 ms 00:21:59.505 [2024-05-15 18:13:51.810858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.505 [2024-05-15 18:13:51.841480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.505 [2024-05-15 18:13:51.841563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:59.505 [2024-05-15 18:13:51.841584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.557 ms 00:21:59.505 [2024-05-15 18:13:51.841595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.505 [2024-05-15 18:13:51.859751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.505 [2024-05-15 18:13:51.859819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:59.505 [2024-05-15 18:13:51.859848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.080 ms 00:21:59.505 [2024-05-15 18:13:51.859861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.505 [2024-05-15 18:13:51.860014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.505 [2024-05-15 18:13:51.860033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:59.505 [2024-05-15 18:13:51.860047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:21:59.505 [2024-05-15 18:13:51.860065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.505 [2024-05-15 18:13:51.889700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.505 [2024-05-15 18:13:51.889776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:59.505 [2024-05-15 18:13:51.889793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.611 ms 00:21:59.505 [2024-05-15 18:13:51.889805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.505 [2024-05-15 18:13:51.919969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.505 [2024-05-15 18:13:51.920020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:59.505 [2024-05-15 18:13:51.920039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.119 ms 00:21:59.505 [2024-05-15 18:13:51.920068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.505 [2024-05-15 18:13:51.950612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.505 [2024-05-15 18:13:51.950708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:59.505 [2024-05-15 18:13:51.950728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.497 ms 00:21:59.505 
[2024-05-15 18:13:51.950740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.505 [2024-05-15 18:13:51.979994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.505 [2024-05-15 18:13:51.980062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:59.505 [2024-05-15 18:13:51.980083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.159 ms 00:21:59.505 [2024-05-15 18:13:51.980096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.505 [2024-05-15 18:13:51.980157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:59.505 [2024-05-15 18:13:51.980183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980463] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:59.505 [2024-05-15 18:13:51.980651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 
[2024-05-15 18:13:51.980775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.980992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:21:59.506 [2024-05-15 18:13:51.981093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:59.506 [2024-05-15 18:13:51.981493] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:59.506 [2024-05-15 18:13:51.981505] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7bf88be1-4a70-4e52-92b3-ca484d9799c3 00:21:59.506 [2024-05-15 18:13:51.981518] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:59.506 [2024-05-15 18:13:51.981530] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:59.506 [2024-05-15 18:13:51.981551] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:59.506 [2024-05-15 18:13:51.981564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:59.506 [2024-05-15 18:13:51.981575] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:59.506 [2024-05-15 18:13:51.981587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:59.506 [2024-05-15 18:13:51.981598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:59.506 [2024-05-15 18:13:51.981609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:59.506 [2024-05-15 18:13:51.981635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:59.506 [2024-05-15 18:13:51.981647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.506 [2024-05-15 18:13:51.981659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:59.506 [2024-05-15 18:13:51.981672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:21:59.506 [2024-05-15 18:13:51.981684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.506 [2024-05-15 18:13:51.998386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.506 [2024-05-15 18:13:51.998463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:59.506 [2024-05-15 18:13:51.998482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.646 ms 00:21:59.506 [2024-05-15 18:13:51.998494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.506 [2024-05-15 18:13:51.998774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.506 [2024-05-15 18:13:51.998791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:59.506 [2024-05-15 18:13:51.998804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:21:59.506 [2024-05-15 18:13:51.998816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.775 [2024-05-15 18:13:52.045513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.775 [2024-05-15 18:13:52.045604] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:59.775 [2024-05-15 18:13:52.045625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.775 [2024-05-15 18:13:52.045638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.775 [2024-05-15 18:13:52.045724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.775 [2024-05-15 18:13:52.045740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:59.775 [2024-05-15 18:13:52.045752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.775 [2024-05-15 18:13:52.045763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.775 [2024-05-15 18:13:52.045880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.775 [2024-05-15 18:13:52.045900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:59.775 [2024-05-15 18:13:52.045912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.775 [2024-05-15 18:13:52.045924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.775 [2024-05-15 18:13:52.045947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.775 [2024-05-15 18:13:52.045962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:59.775 [2024-05-15 18:13:52.045974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.775 [2024-05-15 18:13:52.045986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.775 [2024-05-15 18:13:52.150763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.775 [2024-05-15 18:13:52.150838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:59.775 [2024-05-15 18:13:52.150857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.775 [2024-05-15 18:13:52.150870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.775 [2024-05-15 18:13:52.191624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.775 [2024-05-15 18:13:52.191714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:59.775 [2024-05-15 18:13:52.191734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.775 [2024-05-15 18:13:52.191747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.775 [2024-05-15 18:13:52.191826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.776 [2024-05-15 18:13:52.191857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:59.776 [2024-05-15 18:13:52.191880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.776 [2024-05-15 18:13:52.191894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.776 [2024-05-15 18:13:52.191941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.776 [2024-05-15 18:13:52.191957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:59.776 [2024-05-15 18:13:52.191969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.776 [2024-05-15 18:13:52.191981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.776 [2024-05-15 18:13:52.192236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:21:59.776 [2024-05-15 18:13:52.192276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:59.776 [2024-05-15 18:13:52.192322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.776 [2024-05-15 18:13:52.192336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.776 [2024-05-15 18:13:52.192394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.776 [2024-05-15 18:13:52.192413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:59.776 [2024-05-15 18:13:52.192426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.776 [2024-05-15 18:13:52.192438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.776 [2024-05-15 18:13:52.192483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.776 [2024-05-15 18:13:52.192498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:59.776 [2024-05-15 18:13:52.192511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.776 [2024-05-15 18:13:52.192530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.776 [2024-05-15 18:13:52.192583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.776 [2024-05-15 18:13:52.192600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:59.776 [2024-05-15 18:13:52.192612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.776 [2024-05-15 18:13:52.192624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.776 [2024-05-15 18:13:52.192769] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 410.785 ms, result 0 00:22:01.676 00:22:01.676 00:22:01.676 18:13:53 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:01.676 [2024-05-15 18:13:54.076419] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:22:01.676 [2024-05-15 18:13:54.076595] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79887 ] 00:22:01.934 [2024-05-15 18:13:54.246619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.191 [2024-05-15 18:13:54.482286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.449 [2024-05-15 18:13:54.821582] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.449 [2024-05-15 18:13:54.821693] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.725 [2024-05-15 18:13:54.978108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.725 [2024-05-15 18:13:54.978200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:02.725 [2024-05-15 18:13:54.978221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:02.725 [2024-05-15 18:13:54.978239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.725 [2024-05-15 18:13:54.978329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.725 [2024-05-15 18:13:54.978352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.725 [2024-05-15 18:13:54.978365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:02.725 [2024-05-15 18:13:54.978377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.725 [2024-05-15 18:13:54.978409] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:02.725 [2024-05-15 18:13:54.979288] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:02.725 [2024-05-15 18:13:54.979363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.725 [2024-05-15 18:13:54.979379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.725 [2024-05-15 18:13:54.979393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:22:02.725 [2024-05-15 18:13:54.979405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.725 [2024-05-15 18:13:54.981364] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:02.725 [2024-05-15 18:13:54.998116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.725 [2024-05-15 18:13:54.998172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:02.725 [2024-05-15 18:13:54.998205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.754 ms 00:22:02.725 [2024-05-15 18:13:54.998216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.725 [2024-05-15 18:13:54.998283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.725 [2024-05-15 18:13:54.998321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:02.725 [2024-05-15 18:13:54.998335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:02.725 [2024-05-15 18:13:54.998346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.725 [2024-05-15 18:13:55.007455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.725 [2024-05-15 
18:13:55.007512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.725 [2024-05-15 18:13:55.007543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.006 ms 00:22:02.725 [2024-05-15 18:13:55.007556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.725 [2024-05-15 18:13:55.007668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.725 [2024-05-15 18:13:55.007688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.725 [2024-05-15 18:13:55.007701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:22:02.725 [2024-05-15 18:13:55.007713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.725 [2024-05-15 18:13:55.007772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.725 [2024-05-15 18:13:55.007789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.725 [2024-05-15 18:13:55.007802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:02.725 [2024-05-15 18:13:55.007814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.725 [2024-05-15 18:13:55.007849] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.725 [2024-05-15 18:13:55.012848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.725 [2024-05-15 18:13:55.012884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.725 [2024-05-15 18:13:55.012900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.008 ms 00:22:02.725 [2024-05-15 18:13:55.012912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.725 [2024-05-15 18:13:55.012950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.725 [2024-05-15 18:13:55.012965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.725 [2024-05-15 18:13:55.012978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:02.725 [2024-05-15 18:13:55.012990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.725 [2024-05-15 18:13:55.013074] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:02.725 [2024-05-15 18:13:55.013110] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:22:02.725 [2024-05-15 18:13:55.013168] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:02.725 [2024-05-15 18:13:55.013189] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:22:02.725 [2024-05-15 18:13:55.013270] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:02.725 [2024-05-15 18:13:55.013287] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.725 [2024-05-15 18:13:55.013302] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:02.725 [2024-05-15 18:13:55.013340] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.725 [2024-05-15 18:13:55.013357] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.725 [2024-05-15 18:13:55.013370] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:02.725 [2024-05-15 18:13:55.013382] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:02.725 [2024-05-15 18:13:55.013394] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:02.725 [2024-05-15 18:13:55.013405] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:02.725 [2024-05-15 18:13:55.013418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.725 [2024-05-15 18:13:55.013430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.725 [2024-05-15 18:13:55.013442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:22:02.725 [2024-05-15 18:13:55.013454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.725 [2024-05-15 18:13:55.013532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.725 [2024-05-15 18:13:55.013550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.725 [2024-05-15 18:13:55.013563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:02.725 [2024-05-15 18:13:55.013574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.725 [2024-05-15 18:13:55.013665] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.725 [2024-05-15 18:13:55.013691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.725 [2024-05-15 18:13:55.013711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.725 [2024-05-15 18:13:55.013724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.725 [2024-05-15 18:13:55.013736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:02.725 [2024-05-15 18:13:55.013747] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.725 [2024-05-15 18:13:55.013759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:02.725 [2024-05-15 18:13:55.013770] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.726 [2024-05-15 18:13:55.013780] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:02.726 [2024-05-15 18:13:55.013791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.726 [2024-05-15 18:13:55.013802] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.726 [2024-05-15 18:13:55.013813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:02.726 [2024-05-15 18:13:55.013824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.726 [2024-05-15 18:13:55.013836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:02.726 [2024-05-15 18:13:55.013847] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:02.726 [2024-05-15 18:13:55.013871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.726 [2024-05-15 18:13:55.013883] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.726 [2024-05-15 18:13:55.013894] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:02.726 [2024-05-15 18:13:55.013905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:22:02.726 [2024-05-15 18:13:55.013916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:02.726 [2024-05-15 18:13:55.013927] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:02.726 [2024-05-15 18:13:55.013939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:02.726 [2024-05-15 18:13:55.013950] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.726 [2024-05-15 18:13:55.013961] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:02.726 [2024-05-15 18:13:55.013972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:02.726 [2024-05-15 18:13:55.013982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.726 [2024-05-15 18:13:55.013993] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:02.726 [2024-05-15 18:13:55.014004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:02.726 [2024-05-15 18:13:55.014015] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.726 [2024-05-15 18:13:55.014026] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:02.726 [2024-05-15 18:13:55.014037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:02.726 [2024-05-15 18:13:55.014048] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.726 [2024-05-15 18:13:55.014059] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:02.726 [2024-05-15 18:13:55.014069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:02.726 [2024-05-15 18:13:55.014080] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.726 [2024-05-15 18:13:55.014092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:02.726 [2024-05-15 18:13:55.014102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.726 [2024-05-15 18:13:55.014113] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.726 [2024-05-15 18:13:55.014125] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:02.726 [2024-05-15 18:13:55.014135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.726 [2024-05-15 18:13:55.014146] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.726 [2024-05-15 18:13:55.014158] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.726 [2024-05-15 18:13:55.014181] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.726 [2024-05-15 18:13:55.014193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.726 [2024-05-15 18:13:55.014205] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.726 [2024-05-15 18:13:55.014217] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.726 [2024-05-15 18:13:55.014228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.726 [2024-05-15 18:13:55.014240] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.726 [2024-05-15 18:13:55.014251] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.726 [2024-05-15 18:13:55.014262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:02.726 [2024-05-15 18:13:55.014275] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.726 [2024-05-15 18:13:55.014289] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.726 [2024-05-15 18:13:55.014325] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:02.726 [2024-05-15 18:13:55.014338] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:02.726 [2024-05-15 18:13:55.014350] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:02.726 [2024-05-15 18:13:55.014362] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:02.726 [2024-05-15 18:13:55.014373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:02.726 [2024-05-15 18:13:55.014385] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:02.726 [2024-05-15 18:13:55.014397] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:02.726 [2024-05-15 18:13:55.014409] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:02.726 [2024-05-15 18:13:55.014421] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:02.726 [2024-05-15 18:13:55.014433] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:02.726 [2024-05-15 18:13:55.014445] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:02.726 [2024-05-15 18:13:55.014457] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:02.726 [2024-05-15 18:13:55.014469] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:02.726 [2024-05-15 18:13:55.014480] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.726 [2024-05-15 18:13:55.014493] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.726 [2024-05-15 18:13:55.014506] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.726 [2024-05-15 18:13:55.014518] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.726 [2024-05-15 18:13:55.014531] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.726 [2024-05-15 18:13:55.014543] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:22:02.726 [2024-05-15 18:13:55.014556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.726 [2024-05-15 18:13:55.014568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.726 [2024-05-15 18:13:55.014580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:22:02.726 [2024-05-15 18:13:55.014592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.726 [2024-05-15 18:13:55.036503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.726 [2024-05-15 18:13:55.036580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.726 [2024-05-15 18:13:55.036600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.849 ms 00:22:02.726 [2024-05-15 18:13:55.036613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.726 [2024-05-15 18:13:55.036739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.726 [2024-05-15 18:13:55.036755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:02.726 [2024-05-15 18:13:55.036768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:22:02.726 [2024-05-15 18:13:55.036780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.726 [2024-05-15 18:13:55.089099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.726 [2024-05-15 18:13:55.089186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.726 [2024-05-15 18:13:55.089228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.235 ms 00:22:02.726 [2024-05-15 18:13:55.089241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.726 [2024-05-15 18:13:55.089332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.726 [2024-05-15 18:13:55.089350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.726 [2024-05-15 18:13:55.089364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:02.726 [2024-05-15 18:13:55.089375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.726 [2024-05-15 18:13:55.090015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.726 [2024-05-15 18:13:55.090049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.726 [2024-05-15 18:13:55.090065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:22:02.726 [2024-05-15 18:13:55.090084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.726 [2024-05-15 18:13:55.090244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.726 [2024-05-15 18:13:55.090264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.726 [2024-05-15 18:13:55.090278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:22:02.726 [2024-05-15 18:13:55.090289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.726 [2024-05-15 18:13:55.114673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.726 [2024-05-15 18:13:55.114741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.726 [2024-05-15 18:13:55.114763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.327 ms 00:22:02.726 [2024-05-15 
18:13:55.114776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.726 [2024-05-15 18:13:55.133343] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:02.727 [2024-05-15 18:13:55.133428] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:02.727 [2024-05-15 18:13:55.133450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.727 [2024-05-15 18:13:55.133463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:02.727 [2024-05-15 18:13:55.133480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.490 ms 00:22:02.727 [2024-05-15 18:13:55.133492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.727 [2024-05-15 18:13:55.163276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.727 [2024-05-15 18:13:55.163356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:02.727 [2024-05-15 18:13:55.163392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.706 ms 00:22:02.727 [2024-05-15 18:13:55.163405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.727 [2024-05-15 18:13:55.180187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.727 [2024-05-15 18:13:55.180251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:02.727 [2024-05-15 18:13:55.180270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.636 ms 00:22:02.727 [2024-05-15 18:13:55.180282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.727 [2024-05-15 18:13:55.195063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.727 [2024-05-15 18:13:55.195099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:02.727 [2024-05-15 18:13:55.195131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.691 ms 00:22:02.727 [2024-05-15 18:13:55.195142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.727 [2024-05-15 18:13:55.195718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.727 [2024-05-15 18:13:55.195753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:02.727 [2024-05-15 18:13:55.195770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:22:02.727 [2024-05-15 18:13:55.195782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.993 [2024-05-15 18:13:55.274397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.993 [2024-05-15 18:13:55.274460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:02.993 [2024-05-15 18:13:55.274496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.582 ms 00:22:02.993 [2024-05-15 18:13:55.274509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.993 [2024-05-15 18:13:55.288111] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:02.993 [2024-05-15 18:13:55.292370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.993 [2024-05-15 18:13:55.292423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:02.993 [2024-05-15 18:13:55.292440] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.727 ms 00:22:02.993 [2024-05-15 18:13:55.292458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.993 [2024-05-15 18:13:55.292603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.993 [2024-05-15 18:13:55.292623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:02.993 [2024-05-15 18:13:55.292637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:02.993 [2024-05-15 18:13:55.292649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.993 [2024-05-15 18:13:55.292739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.993 [2024-05-15 18:13:55.292764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:02.993 [2024-05-15 18:13:55.292778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:02.993 [2024-05-15 18:13:55.292790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.993 [2024-05-15 18:13:55.294848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.993 [2024-05-15 18:13:55.294886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:02.993 [2024-05-15 18:13:55.294901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.022 ms 00:22:02.993 [2024-05-15 18:13:55.294913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.993 [2024-05-15 18:13:55.294952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.993 [2024-05-15 18:13:55.294968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:02.993 [2024-05-15 18:13:55.294980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:02.993 [2024-05-15 18:13:55.294992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.993 [2024-05-15 18:13:55.295035] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:02.993 [2024-05-15 18:13:55.295052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.993 [2024-05-15 18:13:55.295068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:02.993 [2024-05-15 18:13:55.295080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:02.993 [2024-05-15 18:13:55.295093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.993 [2024-05-15 18:13:55.326666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.993 [2024-05-15 18:13:55.326731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:02.993 [2024-05-15 18:13:55.326752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.545 ms 00:22:02.993 [2024-05-15 18:13:55.326765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.993 [2024-05-15 18:13:55.326914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.993 [2024-05-15 18:13:55.326936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:02.993 [2024-05-15 18:13:55.326951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:02.993 [2024-05-15 18:13:55.326963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.993 [2024-05-15 18:13:55.328376] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 349.716 ms, result 0 00:22:43.649 Copying: 1024/1024 [MB] (average 25 MBps) [2024-05-15 18:14:36.124570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.649 [2024-05-15 18:14:36.124663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:43.649 [2024-05-15 18:14:36.124688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:43.649 [2024-05-15 18:14:36.124701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.649 [2024-05-15 18:14:36.124750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:43.649 [2024-05-15 18:14:36.128510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.649 [2024-05-15 18:14:36.128549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:43.649 [2024-05-15 18:14:36.128567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.735 ms 00:22:43.649 [2024-05-15 18:14:36.128587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.649 [2024-05-15 18:14:36.128850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.649 [2024-05-15 18:14:36.128878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:43.649 [2024-05-15 18:14:36.128893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:22:43.649 [2024-05-15 18:14:36.128905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.649 [2024-05-15 18:14:36.132665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.649 [2024-05-15 18:14:36.132698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:43.649 [2024-05-15 18:14:36.132713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.737 ms 00:22:43.649 [2024-05-15 18:14:36.132724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.649 [2024-05-15 18:14:36.139463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.649 [2024-05-15 18:14:36.139504] mngt/ftl_mngt.c:
407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:43.649 [2024-05-15 18:14:36.139520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.712 ms 00:22:43.649 [2024-05-15 18:14:36.139531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.908 [2024-05-15 18:14:36.171542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.908 [2024-05-15 18:14:36.171598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:43.908 [2024-05-15 18:14:36.171617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.938 ms 00:22:43.908 [2024-05-15 18:14:36.171630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.908 [2024-05-15 18:14:36.189315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.908 [2024-05-15 18:14:36.189394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:43.908 [2024-05-15 18:14:36.189413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.626 ms 00:22:43.908 [2024-05-15 18:14:36.189426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.908 [2024-05-15 18:14:36.189592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.908 [2024-05-15 18:14:36.189614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:43.908 [2024-05-15 18:14:36.189635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:22:43.908 [2024-05-15 18:14:36.189648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.908 [2024-05-15 18:14:36.221728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.908 [2024-05-15 18:14:36.221787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:43.908 [2024-05-15 18:14:36.221807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.052 ms 00:22:43.908 [2024-05-15 18:14:36.221819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.908 [2024-05-15 18:14:36.252060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.908 [2024-05-15 18:14:36.252138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:43.908 [2024-05-15 18:14:36.252174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.188 ms 00:22:43.908 [2024-05-15 18:14:36.252188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.908 [2024-05-15 18:14:36.282349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.908 [2024-05-15 18:14:36.282393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:43.908 [2024-05-15 18:14:36.282412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.111 ms 00:22:43.908 [2024-05-15 18:14:36.282425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.908 [2024-05-15 18:14:36.312524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.908 [2024-05-15 18:14:36.312581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:43.908 [2024-05-15 18:14:36.312598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.999 ms 00:22:43.908 [2024-05-15 18:14:36.312611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.908 [2024-05-15 18:14:36.312658] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:22:43.909 [2024-05-15 18:14:36.312682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.312993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313321] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313640] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:43.909 [2024-05-15 18:14:36.313853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:43.910 [2024-05-15 18:14:36.313865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:43.910 [2024-05-15 18:14:36.313878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:43.910 [2024-05-15 18:14:36.313898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:43.910 [2024-05-15 18:14:36.313910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:43.910 [2024-05-15 18:14:36.313923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:43.910 [2024-05-15 18:14:36.313935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:43.910 [2024-05-15 18:14:36.313947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:43.910 [2024-05-15 
18:14:36.313963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:43.910 [2024-05-15 18:14:36.313985] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:43.910 [2024-05-15 18:14:36.313998] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7bf88be1-4a70-4e52-92b3-ca484d9799c3 00:22:43.910 [2024-05-15 18:14:36.314019] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:43.910 [2024-05-15 18:14:36.314032] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:43.910 [2024-05-15 18:14:36.314044] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:43.910 [2024-05-15 18:14:36.314056] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:43.910 [2024-05-15 18:14:36.314067] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:43.910 [2024-05-15 18:14:36.314080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:43.910 [2024-05-15 18:14:36.314092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:43.910 [2024-05-15 18:14:36.314117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:43.910 [2024-05-15 18:14:36.314129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:43.910 [2024-05-15 18:14:36.314141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.910 [2024-05-15 18:14:36.314153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:43.910 [2024-05-15 18:14:36.314167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.485 ms 00:22:43.910 [2024-05-15 18:14:36.314179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.910 [2024-05-15 18:14:36.331379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.910 [2024-05-15 18:14:36.331426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:43.910 [2024-05-15 18:14:36.331443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.154 ms 00:22:43.910 [2024-05-15 18:14:36.331455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.910 [2024-05-15 18:14:36.331712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.910 [2024-05-15 18:14:36.331738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:43.910 [2024-05-15 18:14:36.331753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:22:43.910 [2024-05-15 18:14:36.331765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.910 [2024-05-15 18:14:36.381411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.910 [2024-05-15 18:14:36.381471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:43.910 [2024-05-15 18:14:36.381489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.910 [2024-05-15 18:14:36.381503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.910 [2024-05-15 18:14:36.381583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.910 [2024-05-15 18:14:36.381599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:43.910 [2024-05-15 18:14:36.381612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.910 
[2024-05-15 18:14:36.381624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.910 [2024-05-15 18:14:36.381721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.910 [2024-05-15 18:14:36.381741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:43.910 [2024-05-15 18:14:36.381754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.910 [2024-05-15 18:14:36.381766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.910 [2024-05-15 18:14:36.381789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.910 [2024-05-15 18:14:36.381803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:43.910 [2024-05-15 18:14:36.381816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.910 [2024-05-15 18:14:36.381828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.168 [2024-05-15 18:14:36.485925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.168 [2024-05-15 18:14:36.485991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:44.168 [2024-05-15 18:14:36.486011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.168 [2024-05-15 18:14:36.486025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.168 [2024-05-15 18:14:36.527383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.168 [2024-05-15 18:14:36.527440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:44.168 [2024-05-15 18:14:36.527460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.168 [2024-05-15 18:14:36.527473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.168 [2024-05-15 18:14:36.527563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.168 [2024-05-15 18:14:36.527580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:44.168 [2024-05-15 18:14:36.527594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.168 [2024-05-15 18:14:36.527606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.168 [2024-05-15 18:14:36.527654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.168 [2024-05-15 18:14:36.527675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:44.168 [2024-05-15 18:14:36.527688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.168 [2024-05-15 18:14:36.527700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.168 [2024-05-15 18:14:36.527823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.169 [2024-05-15 18:14:36.527854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:44.169 [2024-05-15 18:14:36.527868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.169 [2024-05-15 18:14:36.527880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.169 [2024-05-15 18:14:36.527947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.169 [2024-05-15 18:14:36.527966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:44.169 [2024-05-15 18:14:36.527979] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.169 [2024-05-15 18:14:36.527991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.169 [2024-05-15 18:14:36.528037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.169 [2024-05-15 18:14:36.528052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:44.169 [2024-05-15 18:14:36.528079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.169 [2024-05-15 18:14:36.528092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.169 [2024-05-15 18:14:36.528145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.169 [2024-05-15 18:14:36.528167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:44.169 [2024-05-15 18:14:36.528181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.169 [2024-05-15 18:14:36.528192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.169 [2024-05-15 18:14:36.528370] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 403.749 ms, result 0 00:22:45.543 00:22:45.543 00:22:45.543 18:14:37 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:47.447 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:47.447 18:14:39 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:47.447 [2024-05-15 18:14:39.884336] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:22:47.447 [2024-05-15 18:14:39.884481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80347 ] 00:22:47.706 [2024-05-15 18:14:40.051022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.965 [2024-05-15 18:14:40.308164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.224 [2024-05-15 18:14:40.653715] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:48.224 [2024-05-15 18:14:40.653793] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:48.484 [2024-05-15 18:14:40.810587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.484 [2024-05-15 18:14:40.810652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:48.484 [2024-05-15 18:14:40.810673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:48.484 [2024-05-15 18:14:40.810691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.484 [2024-05-15 18:14:40.810761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.484 [2024-05-15 18:14:40.810782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:48.484 [2024-05-15 18:14:40.810794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:48.484 [2024-05-15 18:14:40.810805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.484 [2024-05-15 18:14:40.810835] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:48.484 [2024-05-15 18:14:40.811728] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:48.484 [2024-05-15 18:14:40.811769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.484 [2024-05-15 18:14:40.811783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:48.484 [2024-05-15 18:14:40.811796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:22:48.484 [2024-05-15 18:14:40.811806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.484 [2024-05-15 18:14:40.813700] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:48.484 [2024-05-15 18:14:40.830394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.484 [2024-05-15 18:14:40.830459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:48.484 [2024-05-15 18:14:40.830478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.695 ms 00:22:48.484 [2024-05-15 18:14:40.830490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.484 [2024-05-15 18:14:40.830569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.484 [2024-05-15 18:14:40.830593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:48.484 [2024-05-15 18:14:40.830606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:48.484 [2024-05-15 18:14:40.830617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.484 [2024-05-15 18:14:40.839188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.484 [2024-05-15 
18:14:40.839241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:48.484 [2024-05-15 18:14:40.839258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.479 ms 00:22:48.484 [2024-05-15 18:14:40.839269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.484 [2024-05-15 18:14:40.839384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.484 [2024-05-15 18:14:40.839405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:48.484 [2024-05-15 18:14:40.839418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:48.484 [2024-05-15 18:14:40.839429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.484 [2024-05-15 18:14:40.839493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.484 [2024-05-15 18:14:40.839509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:48.484 [2024-05-15 18:14:40.839523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:48.484 [2024-05-15 18:14:40.839533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.484 [2024-05-15 18:14:40.839569] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:48.484 [2024-05-15 18:14:40.844512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.484 [2024-05-15 18:14:40.844549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:48.484 [2024-05-15 18:14:40.844564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.953 ms 00:22:48.484 [2024-05-15 18:14:40.844576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.484 [2024-05-15 18:14:40.844613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.484 [2024-05-15 18:14:40.844627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:48.484 [2024-05-15 18:14:40.844639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:48.484 [2024-05-15 18:14:40.844650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.484 [2024-05-15 18:14:40.844721] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:48.484 [2024-05-15 18:14:40.844753] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:22:48.484 [2024-05-15 18:14:40.844793] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:48.484 [2024-05-15 18:14:40.844812] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:22:48.484 [2024-05-15 18:14:40.844891] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:48.484 [2024-05-15 18:14:40.844912] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:48.484 [2024-05-15 18:14:40.844936] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:48.484 [2024-05-15 18:14:40.844956] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:48.484 [2024-05-15 18:14:40.844971] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:48.484 [2024-05-15 18:14:40.844990] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:48.484 [2024-05-15 18:14:40.845007] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:48.484 [2024-05-15 18:14:40.845032] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:48.484 [2024-05-15 18:14:40.845049] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:48.484 [2024-05-15 18:14:40.845062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.484 [2024-05-15 18:14:40.845073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:48.484 [2024-05-15 18:14:40.845085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:22:48.484 [2024-05-15 18:14:40.845096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.484 [2024-05-15 18:14:40.845191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.484 [2024-05-15 18:14:40.845220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:48.484 [2024-05-15 18:14:40.845233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:48.484 [2024-05-15 18:14:40.845244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.484 [2024-05-15 18:14:40.845351] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:48.484 [2024-05-15 18:14:40.845380] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:48.484 [2024-05-15 18:14:40.845400] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:48.485 [2024-05-15 18:14:40.845412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.485 [2024-05-15 18:14:40.845424] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:48.485 [2024-05-15 18:14:40.845441] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:48.485 [2024-05-15 18:14:40.845462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:48.485 [2024-05-15 18:14:40.845478] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:48.485 [2024-05-15 18:14:40.845489] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:48.485 [2024-05-15 18:14:40.845499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:48.485 [2024-05-15 18:14:40.845509] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:48.485 [2024-05-15 18:14:40.845519] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:48.485 [2024-05-15 18:14:40.845529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:48.485 [2024-05-15 18:14:40.845539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:48.485 [2024-05-15 18:14:40.845550] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:48.485 [2024-05-15 18:14:40.845575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.485 [2024-05-15 18:14:40.845597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:48.485 [2024-05-15 18:14:40.845618] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:48.485 [2024-05-15 18:14:40.845630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:22:48.485 [2024-05-15 18:14:40.845641] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:48.485 [2024-05-15 18:14:40.845651] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:48.485 [2024-05-15 18:14:40.845666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:48.485 [2024-05-15 18:14:40.845684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:48.485 [2024-05-15 18:14:40.845703] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:48.485 [2024-05-15 18:14:40.845715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:48.485 [2024-05-15 18:14:40.845726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:48.485 [2024-05-15 18:14:40.845736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:48.485 [2024-05-15 18:14:40.845749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:48.485 [2024-05-15 18:14:40.845759] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:48.485 [2024-05-15 18:14:40.845769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:48.485 [2024-05-15 18:14:40.845779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:48.485 [2024-05-15 18:14:40.845789] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:48.485 [2024-05-15 18:14:40.845799] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:48.485 [2024-05-15 18:14:40.845809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:48.485 [2024-05-15 18:14:40.845819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:48.485 [2024-05-15 18:14:40.845829] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:48.485 [2024-05-15 18:14:40.845844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:48.485 [2024-05-15 18:14:40.845862] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:48.485 [2024-05-15 18:14:40.845880] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:48.485 [2024-05-15 18:14:40.845897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:48.485 [2024-05-15 18:14:40.845908] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:48.485 [2024-05-15 18:14:40.845919] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:48.485 [2024-05-15 18:14:40.845936] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:48.485 [2024-05-15 18:14:40.845947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.485 [2024-05-15 18:14:40.845958] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:48.485 [2024-05-15 18:14:40.845969] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:48.485 [2024-05-15 18:14:40.845980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:48.485 [2024-05-15 18:14:40.845991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:48.485 [2024-05-15 18:14:40.846002] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:48.485 [2024-05-15 18:14:40.846013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:48.485 [2024-05-15 18:14:40.846025] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:48.485 [2024-05-15 18:14:40.846038] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:48.485 [2024-05-15 18:14:40.846050] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:48.485 [2024-05-15 18:14:40.846063] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:48.485 [2024-05-15 18:14:40.846078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:48.485 [2024-05-15 18:14:40.846097] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:48.485 [2024-05-15 18:14:40.846117] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:48.485 [2024-05-15 18:14:40.846131] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:48.485 [2024-05-15 18:14:40.846143] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:48.485 [2024-05-15 18:14:40.846154] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:48.485 [2024-05-15 18:14:40.846166] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:48.485 [2024-05-15 18:14:40.846177] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:48.485 [2024-05-15 18:14:40.846188] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:48.485 [2024-05-15 18:14:40.846200] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:48.485 [2024-05-15 18:14:40.846212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:48.485 [2024-05-15 18:14:40.846223] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:48.485 [2024-05-15 18:14:40.846235] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:48.485 [2024-05-15 18:14:40.846247] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:48.485 [2024-05-15 18:14:40.846259] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:48.485 [2024-05-15 18:14:40.846271] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:48.485 [2024-05-15 18:14:40.846282] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:22:48.485 [2024-05-15 18:14:40.846313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.485 [2024-05-15 18:14:40.846327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:48.485 [2024-05-15 18:14:40.846338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:22:48.485 [2024-05-15 18:14:40.846350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.485 [2024-05-15 18:14:40.868574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.485 [2024-05-15 18:14:40.868628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:48.485 [2024-05-15 18:14:40.868648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.158 ms 00:22:48.485 [2024-05-15 18:14:40.868660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.485 [2024-05-15 18:14:40.868778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.486 [2024-05-15 18:14:40.868793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:48.486 [2024-05-15 18:14:40.868806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:48.486 [2024-05-15 18:14:40.868816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.486 [2024-05-15 18:14:40.921599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.486 [2024-05-15 18:14:40.921684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:48.486 [2024-05-15 18:14:40.921710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.702 ms 00:22:48.486 [2024-05-15 18:14:40.921722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.486 [2024-05-15 18:14:40.921808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.486 [2024-05-15 18:14:40.921825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:48.486 [2024-05-15 18:14:40.921839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:48.486 [2024-05-15 18:14:40.921850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.486 [2024-05-15 18:14:40.922476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.486 [2024-05-15 18:14:40.922505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:48.486 [2024-05-15 18:14:40.922520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:22:48.486 [2024-05-15 18:14:40.922537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.486 [2024-05-15 18:14:40.922699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.486 [2024-05-15 18:14:40.922717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:48.486 [2024-05-15 18:14:40.922730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:22:48.486 [2024-05-15 18:14:40.922741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.486 [2024-05-15 18:14:40.944325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.486 [2024-05-15 18:14:40.944377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:48.486 [2024-05-15 18:14:40.944396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.555 ms 00:22:48.486 [2024-05-15 
18:14:40.944408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.486 [2024-05-15 18:14:40.960972] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:48.486 [2024-05-15 18:14:40.961020] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:48.486 [2024-05-15 18:14:40.961041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.486 [2024-05-15 18:14:40.961053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:48.486 [2024-05-15 18:14:40.961067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.484 ms 00:22:48.486 [2024-05-15 18:14:40.961079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.745 [2024-05-15 18:14:40.990857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.745 [2024-05-15 18:14:40.990919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:48.745 [2024-05-15 18:14:40.990939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.729 ms 00:22:48.745 [2024-05-15 18:14:40.990951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.745 [2024-05-15 18:14:41.006832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.745 [2024-05-15 18:14:41.006886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:48.745 [2024-05-15 18:14:41.006907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.810 ms 00:22:48.745 [2024-05-15 18:14:41.006919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.745 [2024-05-15 18:14:41.022336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.745 [2024-05-15 18:14:41.022384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:48.745 [2024-05-15 18:14:41.022403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.366 ms 00:22:48.745 [2024-05-15 18:14:41.022414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.745 [2024-05-15 18:14:41.022894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.745 [2024-05-15 18:14:41.022931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:48.745 [2024-05-15 18:14:41.022946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:22:48.745 [2024-05-15 18:14:41.022958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.745 [2024-05-15 18:14:41.100957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.745 [2024-05-15 18:14:41.101036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:48.745 [2024-05-15 18:14:41.101058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.974 ms 00:22:48.745 [2024-05-15 18:14:41.101070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.745 [2024-05-15 18:14:41.113431] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:48.745 [2024-05-15 18:14:41.117195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.745 [2024-05-15 18:14:41.117236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:48.745 [2024-05-15 18:14:41.117254] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.051 ms 00:22:48.745 [2024-05-15 18:14:41.117276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.745 [2024-05-15 18:14:41.117400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.745 [2024-05-15 18:14:41.117420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:48.745 [2024-05-15 18:14:41.117433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:48.745 [2024-05-15 18:14:41.117445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.745 [2024-05-15 18:14:41.117534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.745 [2024-05-15 18:14:41.117552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:48.745 [2024-05-15 18:14:41.117565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:48.745 [2024-05-15 18:14:41.117576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.745 [2024-05-15 18:14:41.119707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.745 [2024-05-15 18:14:41.119746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:48.745 [2024-05-15 18:14:41.119760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.096 ms 00:22:48.746 [2024-05-15 18:14:41.119772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.746 [2024-05-15 18:14:41.119807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.746 [2024-05-15 18:14:41.119823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:48.746 [2024-05-15 18:14:41.119835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:48.746 [2024-05-15 18:14:41.119845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.746 [2024-05-15 18:14:41.119898] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:48.746 [2024-05-15 18:14:41.119916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.746 [2024-05-15 18:14:41.119933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:48.746 [2024-05-15 18:14:41.119953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:48.746 [2024-05-15 18:14:41.119963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.746 [2024-05-15 18:14:41.150942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.746 [2024-05-15 18:14:41.151004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:48.746 [2024-05-15 18:14:41.151022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.954 ms 00:22:48.746 [2024-05-15 18:14:41.151035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.746 [2024-05-15 18:14:41.151141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.746 [2024-05-15 18:14:41.151160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:48.746 [2024-05-15 18:14:41.151173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:48.746 [2024-05-15 18:14:41.151184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.746 [2024-05-15 18:14:41.152522] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 341.408 ms, result 0 00:23:27.544  Copying: 1024/1024 [MB] (average 26 MBps)[2024-05-15 18:15:19.937824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.544 [2024-05-15 18:15:19.938060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:27.544 [2024-05-15 18:15:19.938228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:27.544 [2024-05-15 18:15:19.938283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.544 [2024-05-15 18:15:19.939643] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:27.544 [2024-05-15 18:15:19.944710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.544 [2024-05-15 18:15:19.944869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:27.544 [2024-05-15 18:15:19.944993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.847 ms 00:23:27.544 [2024-05-15 18:15:19.945122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.544 [2024-05-15 18:15:19.958238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.544 [2024-05-15 18:15:19.958440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:27.544 [2024-05-15 18:15:19.958565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.992 ms 00:23:27.544 [2024-05-15 18:15:19.958680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.544 [2024-05-15 18:15:19.980798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.544 [2024-05-15 18:15:19.980979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:27.544 [2024-05-15 18:15:19.981007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.052 ms 00:23:27.544 [2024-05-15 18:15:19.981021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.544 [2024-05-15 18:15:19.987645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.544 [2024-05-15 18:15:19.987675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
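Each FTL management step in this run is traced by mngt/ftl_mngt.c as a quadruple of records: 406:trace_step (Action), 407 (name), 409 (duration), 410 (status). Per-step timings can therefore be tabulated straight from the console output; a minimal sketch, assuming the raw log is saved one record per line as console.log (hypothetical filename):

  awk '/407:trace_step/ { sub(/.*name: /, "");     step = $0 }
       /409:trace_step/ { sub(/.*duration: /, ""); printf "%-36s %s\n", step, $0 }' console.log

For the 'FTL startup' process above, this shows the 341.408 ms total is dominated by Restore P2L checkpoints (77.974 ms) and Initialize NV cache (52.702 ms). Note also that the WAF reported in the statistics dump further below is simply total writes / user writes: 118464 / 117504 ≈ 1.0082.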
00:23:27.544 [2024-05-15 18:15:19.987705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.580 ms 00:23:27.544 [2024-05-15 18:15:19.987716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.544 [2024-05-15 18:15:20.020169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.544 [2024-05-15 18:15:20.020226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:27.544 [2024-05-15 18:15:20.020246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.405 ms 00:23:27.544 [2024-05-15 18:15:20.020257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.544 [2024-05-15 18:15:20.038881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.544 [2024-05-15 18:15:20.038951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:27.544 [2024-05-15 18:15:20.038982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.561 ms 00:23:27.544 [2024-05-15 18:15:20.038994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.804 [2024-05-15 18:15:20.130202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.804 [2024-05-15 18:15:20.130303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:27.804 [2024-05-15 18:15:20.130327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.149 ms 00:23:27.804 [2024-05-15 18:15:20.130340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.804 [2024-05-15 18:15:20.162733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.804 [2024-05-15 18:15:20.162805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:27.804 [2024-05-15 18:15:20.162826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.364 ms 00:23:27.804 [2024-05-15 18:15:20.162838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.804 [2024-05-15 18:15:20.194993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.804 [2024-05-15 18:15:20.195059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:27.804 [2024-05-15 18:15:20.195098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.063 ms 00:23:27.804 [2024-05-15 18:15:20.195111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.804 [2024-05-15 18:15:20.224917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.804 [2024-05-15 18:15:20.224961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:27.804 [2024-05-15 18:15:20.224979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.752 ms 00:23:27.804 [2024-05-15 18:15:20.224991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.804 [2024-05-15 18:15:20.254837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.804 [2024-05-15 18:15:20.254890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:27.804 [2024-05-15 18:15:20.254908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.752 ms 00:23:27.804 [2024-05-15 18:15:20.254920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.804 [2024-05-15 18:15:20.254968] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:27.804 [2024-05-15 
18:15:20.254992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 117504 / 261120 wr_cnt: 1 state: open 00:23:27.804 [2024-05-15 18:15:20.255006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:23:27.804 [2024-05-15 18:15:20.255317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:27.804 [2024-05-15 18:15:20.255838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.255849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.255861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.255872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.255884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.255896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.255919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.255931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.255942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.255954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.255976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.255987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.255999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256230] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:27.805 [2024-05-15 18:15:20.256251] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:27.805 [2024-05-15 18:15:20.256262] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7bf88be1-4a70-4e52-92b3-ca484d9799c3 00:23:27.805 [2024-05-15 18:15:20.256274] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 117504 00:23:27.805 [2024-05-15 18:15:20.256284] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 118464 00:23:27.805 [2024-05-15 18:15:20.256306] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 117504 00:23:27.805 [2024-05-15 18:15:20.256319] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:23:27.805 [2024-05-15 18:15:20.256330] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:27.805 [2024-05-15 18:15:20.256341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:27.805 [2024-05-15 18:15:20.256359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:27.805 [2024-05-15 18:15:20.256382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:27.805 [2024-05-15 18:15:20.256393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:27.805 [2024-05-15 18:15:20.256404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.805 [2024-05-15 18:15:20.256415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:27.805 [2024-05-15 18:15:20.256427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:23:27.805 [2024-05-15 18:15:20.256446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.805 [2024-05-15 18:15:20.273354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.805 [2024-05-15 18:15:20.273393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:27.805 [2024-05-15 18:15:20.273410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.850 ms 00:23:27.805 [2024-05-15 18:15:20.273422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.805 [2024-05-15 18:15:20.273687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.805 [2024-05-15 18:15:20.273708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:27.805 [2024-05-15 18:15:20.273722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:23:27.805 [2024-05-15 18:15:20.273733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.127 [2024-05-15 18:15:20.321144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.127 [2024-05-15 18:15:20.321215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:28.127 [2024-05-15 18:15:20.321242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.127 [2024-05-15 18:15:20.321255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.127 [2024-05-15 18:15:20.321389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.127 [2024-05-15 18:15:20.321413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:28.127 [2024-05-15 18:15:20.321426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.127 [2024-05-15 18:15:20.321438] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.127 [2024-05-15 18:15:20.321562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.127 [2024-05-15 18:15:20.321593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:28.127 [2024-05-15 18:15:20.321616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.127 [2024-05-15 18:15:20.321636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.127 [2024-05-15 18:15:20.321679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.127 [2024-05-15 18:15:20.321693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:28.127 [2024-05-15 18:15:20.321705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.127 [2024-05-15 18:15:20.321717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.127 [2024-05-15 18:15:20.432813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.128 [2024-05-15 18:15:20.432871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:28.128 [2024-05-15 18:15:20.432891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.128 [2024-05-15 18:15:20.432911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.128 [2024-05-15 18:15:20.473281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.128 [2024-05-15 18:15:20.473374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:28.128 [2024-05-15 18:15:20.473395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.128 [2024-05-15 18:15:20.473408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.128 [2024-05-15 18:15:20.473484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.128 [2024-05-15 18:15:20.473501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:28.128 [2024-05-15 18:15:20.473514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.128 [2024-05-15 18:15:20.473526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.128 [2024-05-15 18:15:20.473582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.128 [2024-05-15 18:15:20.473598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:28.128 [2024-05-15 18:15:20.473610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.128 [2024-05-15 18:15:20.473621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.128 [2024-05-15 18:15:20.473754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.128 [2024-05-15 18:15:20.473775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:28.128 [2024-05-15 18:15:20.473788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.128 [2024-05-15 18:15:20.473798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.128 [2024-05-15 18:15:20.473855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.128 [2024-05-15 18:15:20.473880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:28.128 [2024-05-15 18:15:20.473892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:28.128 [2024-05-15 18:15:20.473904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.128 [2024-05-15 18:15:20.473949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.128 [2024-05-15 18:15:20.473963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:28.128 [2024-05-15 18:15:20.473975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.128 [2024-05-15 18:15:20.473986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.128 [2024-05-15 18:15:20.474041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.128 [2024-05-15 18:15:20.474058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:28.128 [2024-05-15 18:15:20.474069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.128 [2024-05-15 18:15:20.474081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.128 [2024-05-15 18:15:20.474225] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 540.255 ms, result 0 00:23:30.030 00:23:30.030 00:23:30.030 18:15:22 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:30.030 [2024-05-15 18:15:22.215152] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:23:30.030 [2024-05-15 18:15:22.215345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80767 ] 00:23:30.030 [2024-05-15 18:15:22.380830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.288 [2024-05-15 18:15:22.614147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.556 [2024-05-15 18:15:22.957279] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:30.556 [2024-05-15 18:15:22.957371] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:30.816 [2024-05-15 18:15:23.113828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.816 [2024-05-15 18:15:23.113893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:30.816 [2024-05-15 18:15:23.113914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:30.816 [2024-05-15 18:15:23.113932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.816 [2024-05-15 18:15:23.114004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.816 [2024-05-15 18:15:23.114026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:30.816 [2024-05-15 18:15:23.114040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:30.816 [2024-05-15 18:15:23.114052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.816 [2024-05-15 18:15:23.114083] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:30.816 [2024-05-15 18:15:23.114985] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV 
Cache device 00:23:30.816 [2024-05-15 18:15:23.115019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.816 [2024-05-15 18:15:23.115033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:30.816 [2024-05-15 18:15:23.115046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:23:30.816 [2024-05-15 18:15:23.115058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.816 [2024-05-15 18:15:23.116965] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:30.816 [2024-05-15 18:15:23.133442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.816 [2024-05-15 18:15:23.133483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:30.816 [2024-05-15 18:15:23.133501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.480 ms 00:23:30.816 [2024-05-15 18:15:23.133514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.816 [2024-05-15 18:15:23.133584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.816 [2024-05-15 18:15:23.133604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:30.816 [2024-05-15 18:15:23.133617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:30.816 [2024-05-15 18:15:23.133629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.816 [2024-05-15 18:15:23.142166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.816 [2024-05-15 18:15:23.142209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:30.816 [2024-05-15 18:15:23.142225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.443 ms 00:23:30.816 [2024-05-15 18:15:23.142238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.816 [2024-05-15 18:15:23.142356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.816 [2024-05-15 18:15:23.142377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:30.816 [2024-05-15 18:15:23.142391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:23:30.816 [2024-05-15 18:15:23.142402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.816 [2024-05-15 18:15:23.142468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.816 [2024-05-15 18:15:23.142485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:30.816 [2024-05-15 18:15:23.142498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:30.816 [2024-05-15 18:15:23.142509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.816 [2024-05-15 18:15:23.142545] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:30.816 [2024-05-15 18:15:23.147514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.816 [2024-05-15 18:15:23.147550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:30.816 [2024-05-15 18:15:23.147566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.978 ms 00:23:30.816 [2024-05-15 18:15:23.147578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.816 [2024-05-15 18:15:23.147617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
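In the superblock layout dump that follows (as in the identical dump from the earlier startup), blk_offs and blk_sz are counts of FTL blocks, and the MiB figures in the region list are consistent with a 4 KiB block size: the region with blk_sz:0x5000 (20480 blocks) corresponds to the 80.00 MiB l2p region, and blk_sz:0x1900000 (26214400 blocks) on the base device to the 102400.00 MiB data_btm region. The arithmetic, checked with plain bash hex arithmetic (nothing SPDK-specific):

  $ echo $(( 0x5000 * 4096 / 1048576 ))     # l2p:      20480 blocks -> 80 MiB
  80
  $ echo $(( 0x1900000 * 4096 / 1048576 ))  # data_btm: 26214400 blocks -> 102400 MiB
  102400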
00:23:30.816 [2024-05-15 18:15:23.147631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:30.816 [2024-05-15 18:15:23.147644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:30.816 [2024-05-15 18:15:23.147655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.816 [2024-05-15 18:15:23.147731] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:30.816 [2024-05-15 18:15:23.147764] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:30.816 [2024-05-15 18:15:23.147805] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:30.816 [2024-05-15 18:15:23.147825] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:30.816 [2024-05-15 18:15:23.147915] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:30.816 [2024-05-15 18:15:23.147939] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:30.816 [2024-05-15 18:15:23.147954] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:30.816 [2024-05-15 18:15:23.147975] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:30.816 [2024-05-15 18:15:23.147989] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:30.816 [2024-05-15 18:15:23.148002] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:30.816 [2024-05-15 18:15:23.148013] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:30.816 [2024-05-15 18:15:23.148025] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:30.816 [2024-05-15 18:15:23.148036] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:30.816 [2024-05-15 18:15:23.148048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.816 [2024-05-15 18:15:23.148059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:30.816 [2024-05-15 18:15:23.148071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:23:30.816 [2024-05-15 18:15:23.148083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.816 [2024-05-15 18:15:23.148160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.816 [2024-05-15 18:15:23.148179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:30.816 [2024-05-15 18:15:23.148191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:30.817 [2024-05-15 18:15:23.148202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.817 [2024-05-15 18:15:23.148315] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:30.817 [2024-05-15 18:15:23.148333] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:30.817 [2024-05-15 18:15:23.148352] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:30.817 [2024-05-15 18:15:23.148371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.817 [2024-05-15 18:15:23.148383] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:30.817 [2024-05-15 18:15:23.148394] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:30.817 [2024-05-15 18:15:23.148405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:30.817 [2024-05-15 18:15:23.148416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:30.817 [2024-05-15 18:15:23.148427] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:30.817 [2024-05-15 18:15:23.148437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:30.817 [2024-05-15 18:15:23.148448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:30.817 [2024-05-15 18:15:23.148461] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:30.817 [2024-05-15 18:15:23.148472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:30.817 [2024-05-15 18:15:23.148483] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:30.817 [2024-05-15 18:15:23.148494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:30.817 [2024-05-15 18:15:23.148517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.817 [2024-05-15 18:15:23.148528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:30.817 [2024-05-15 18:15:23.148539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:30.817 [2024-05-15 18:15:23.148550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.817 [2024-05-15 18:15:23.148561] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:30.817 [2024-05-15 18:15:23.148572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:30.817 [2024-05-15 18:15:23.148583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:30.817 [2024-05-15 18:15:23.148594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:30.817 [2024-05-15 18:15:23.148605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:30.817 [2024-05-15 18:15:23.148615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:30.817 [2024-05-15 18:15:23.148626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:30.817 [2024-05-15 18:15:23.148636] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:30.817 [2024-05-15 18:15:23.148647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:30.817 [2024-05-15 18:15:23.148657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:30.817 [2024-05-15 18:15:23.148667] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:30.817 [2024-05-15 18:15:23.148677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:30.817 [2024-05-15 18:15:23.148688] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:30.817 [2024-05-15 18:15:23.148698] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:30.817 [2024-05-15 18:15:23.148708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:30.817 [2024-05-15 18:15:23.148718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:30.817 [2024-05-15 18:15:23.148729] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:30.817 
[2024-05-15 18:15:23.148739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:30.817 [2024-05-15 18:15:23.148749] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:30.817 [2024-05-15 18:15:23.148759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:30.817 [2024-05-15 18:15:23.148769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:30.817 [2024-05-15 18:15:23.148779] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:30.817 [2024-05-15 18:15:23.148791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:30.817 [2024-05-15 18:15:23.148807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:30.817 [2024-05-15 18:15:23.148819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.817 [2024-05-15 18:15:23.148831] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:30.817 [2024-05-15 18:15:23.148842] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:30.817 [2024-05-15 18:15:23.148853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:30.817 [2024-05-15 18:15:23.148864] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:30.817 [2024-05-15 18:15:23.148874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:30.817 [2024-05-15 18:15:23.148885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:30.817 [2024-05-15 18:15:23.148897] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:30.817 [2024-05-15 18:15:23.148911] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:30.817 [2024-05-15 18:15:23.148925] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:30.817 [2024-05-15 18:15:23.148937] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:30.817 [2024-05-15 18:15:23.148949] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:30.817 [2024-05-15 18:15:23.148960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:30.817 [2024-05-15 18:15:23.148972] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:30.817 [2024-05-15 18:15:23.148984] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:30.817 [2024-05-15 18:15:23.148995] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:30.817 [2024-05-15 18:15:23.149007] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:30.817 [2024-05-15 18:15:23.149018] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:30.817 [2024-05-15 18:15:23.149030] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:30.817 [2024-05-15 18:15:23.149041] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:30.817 [2024-05-15 18:15:23.149053] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:30.817 [2024-05-15 18:15:23.149065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:30.817 [2024-05-15 18:15:23.149076] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:30.817 [2024-05-15 18:15:23.149088] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:30.817 [2024-05-15 18:15:23.149101] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:30.817 [2024-05-15 18:15:23.149113] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:30.817 [2024-05-15 18:15:23.149125] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:30.817 [2024-05-15 18:15:23.149136] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:30.817 [2024-05-15 18:15:23.149149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.817 [2024-05-15 18:15:23.149161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:30.817 [2024-05-15 18:15:23.149173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:23:30.817 [2024-05-15 18:15:23.149184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.817 [2024-05-15 18:15:23.171020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.817 [2024-05-15 18:15:23.171070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:30.817 [2024-05-15 18:15:23.171090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.774 ms 00:23:30.817 [2024-05-15 18:15:23.171102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.817 [2024-05-15 18:15:23.171225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.817 [2024-05-15 18:15:23.171241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:30.817 [2024-05-15 18:15:23.171254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:30.817 [2024-05-15 18:15:23.171266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.817 [2024-05-15 18:15:23.229506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.817 [2024-05-15 18:15:23.229562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:30.817 [2024-05-15 18:15:23.229588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.149 ms 00:23:30.817 [2024-05-15 18:15:23.229601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.817 [2024-05-15 18:15:23.229678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:30.817 [2024-05-15 18:15:23.229695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:30.817 [2024-05-15 18:15:23.229709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:30.817 [2024-05-15 18:15:23.229720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.818 [2024-05-15 18:15:23.230365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.818 [2024-05-15 18:15:23.230386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:30.818 [2024-05-15 18:15:23.230400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:23:30.818 [2024-05-15 18:15:23.230417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.818 [2024-05-15 18:15:23.230579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.818 [2024-05-15 18:15:23.230598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:30.818 [2024-05-15 18:15:23.230612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:23:30.818 [2024-05-15 18:15:23.230623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.818 [2024-05-15 18:15:23.250327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.818 [2024-05-15 18:15:23.250377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:30.818 [2024-05-15 18:15:23.250396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.675 ms 00:23:30.818 [2024-05-15 18:15:23.250409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.818 [2024-05-15 18:15:23.267131] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:30.818 [2024-05-15 18:15:23.267174] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:30.818 [2024-05-15 18:15:23.267193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.818 [2024-05-15 18:15:23.267205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:30.818 [2024-05-15 18:15:23.267220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.624 ms 00:23:30.818 [2024-05-15 18:15:23.267231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.818 [2024-05-15 18:15:23.296303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.818 [2024-05-15 18:15:23.296353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:30.818 [2024-05-15 18:15:23.296372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.010 ms 00:23:30.818 [2024-05-15 18:15:23.296385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.818 [2024-05-15 18:15:23.312068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.818 [2024-05-15 18:15:23.312123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:30.818 [2024-05-15 18:15:23.312141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.612 ms 00:23:30.818 [2024-05-15 18:15:23.312153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.076 [2024-05-15 18:15:23.327189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.076 [2024-05-15 18:15:23.327242] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:31.076 [2024-05-15 18:15:23.327260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.982 ms 00:23:31.076 [2024-05-15 18:15:23.327272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.076 [2024-05-15 18:15:23.327774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.076 [2024-05-15 18:15:23.327807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:31.076 [2024-05-15 18:15:23.327824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:23:31.076 [2024-05-15 18:15:23.327836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.076 [2024-05-15 18:15:23.405900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.076 [2024-05-15 18:15:23.405967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:31.076 [2024-05-15 18:15:23.405988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.038 ms 00:23:31.076 [2024-05-15 18:15:23.406000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.076 [2024-05-15 18:15:23.418427] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:31.076 [2024-05-15 18:15:23.422324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.076 [2024-05-15 18:15:23.422363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:31.076 [2024-05-15 18:15:23.422381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.254 ms 00:23:31.076 [2024-05-15 18:15:23.422399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.076 [2024-05-15 18:15:23.422514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.076 [2024-05-15 18:15:23.422544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:31.076 [2024-05-15 18:15:23.422558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:31.076 [2024-05-15 18:15:23.422569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.076 [2024-05-15 18:15:23.424201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.076 [2024-05-15 18:15:23.424237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:31.076 [2024-05-15 18:15:23.424253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:23:31.076 [2024-05-15 18:15:23.424264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.076 [2024-05-15 18:15:23.426351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.076 [2024-05-15 18:15:23.426386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:31.076 [2024-05-15 18:15:23.426401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.034 ms 00:23:31.076 [2024-05-15 18:15:23.426412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.076 [2024-05-15 18:15:23.426449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.076 [2024-05-15 18:15:23.426464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:31.076 [2024-05-15 18:15:23.426477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:31.076 [2024-05-15 18:15:23.426489] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:31.076 [2024-05-15 18:15:23.426533] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:31.076 [2024-05-15 18:15:23.426550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.076 [2024-05-15 18:15:23.426567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:31.076 [2024-05-15 18:15:23.426579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:31.076 [2024-05-15 18:15:23.426591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.076 [2024-05-15 18:15:23.457756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.076 [2024-05-15 18:15:23.457803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:31.076 [2024-05-15 18:15:23.457821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.122 ms 00:23:31.076 [2024-05-15 18:15:23.457834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.076 [2024-05-15 18:15:23.457927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.076 [2024-05-15 18:15:23.457947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:31.076 [2024-05-15 18:15:23.457961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:31.076 [2024-05-15 18:15:23.457972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.076 [2024-05-15 18:15:23.465908] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 350.230 ms, result 0 00:24:09.758  Copying: 1024/1024 [MB] (average 26 MBps)[2024-05-15 18:16:02.208322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.758 [2024-05-15 18:16:02.208405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:09.758 [2024-05-15 18:16:02.208432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:09.758 [2024-05-15 18:16:02.208448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.758 [2024-05-15 18:16:02.208507] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*:
[FTL][ftl0] FTL IO channel destroy on app_thread 00:24:09.758 [2024-05-15 18:16:02.213950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.758 [2024-05-15 18:16:02.214005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:09.758 [2024-05-15 18:16:02.214028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.412 ms 00:24:09.758 [2024-05-15 18:16:02.214045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.758 [2024-05-15 18:16:02.214451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.758 [2024-05-15 18:16:02.214498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:09.758 [2024-05-15 18:16:02.214518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:24:09.758 [2024-05-15 18:16:02.214544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.758 [2024-05-15 18:16:02.219878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.758 [2024-05-15 18:16:02.219937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:09.758 [2024-05-15 18:16:02.219955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.305 ms 00:24:09.758 [2024-05-15 18:16:02.219967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.758 [2024-05-15 18:16:02.226815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.758 [2024-05-15 18:16:02.226869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:09.759 [2024-05-15 18:16:02.226885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.798 ms 00:24:09.759 [2024-05-15 18:16:02.226897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.759 [2024-05-15 18:16:02.259386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.759 [2024-05-15 18:16:02.259440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:09.759 [2024-05-15 18:16:02.259458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.425 ms 00:24:09.759 [2024-05-15 18:16:02.259470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.018 [2024-05-15 18:16:02.277682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.018 [2024-05-15 18:16:02.277748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:10.018 [2024-05-15 18:16:02.277774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.164 ms 00:24:10.018 [2024-05-15 18:16:02.277786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.018 [2024-05-15 18:16:02.373709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.018 [2024-05-15 18:16:02.373819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:10.018 [2024-05-15 18:16:02.373844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.867 ms 00:24:10.018 [2024-05-15 18:16:02.373857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.018 [2024-05-15 18:16:02.405823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.018 [2024-05-15 18:16:02.405910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:10.018 [2024-05-15 18:16:02.405929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.943 ms 
00:24:10.018 [2024-05-15 18:16:02.405941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.018 [2024-05-15 18:16:02.436584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.018 [2024-05-15 18:16:02.436653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:10.018 [2024-05-15 18:16:02.436673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.595 ms 00:24:10.018 [2024-05-15 18:16:02.436702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.018 [2024-05-15 18:16:02.467506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.018 [2024-05-15 18:16:02.467554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:10.018 [2024-05-15 18:16:02.467572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.753 ms 00:24:10.018 [2024-05-15 18:16:02.467584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.018 [2024-05-15 18:16:02.498622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.018 [2024-05-15 18:16:02.498684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:10.018 [2024-05-15 18:16:02.498702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.936 ms 00:24:10.018 [2024-05-15 18:16:02.498713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.018 [2024-05-15 18:16:02.498761] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:10.018 [2024-05-15 18:16:02.498784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:24:10.018 [2024-05-15 18:16:02.498806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 
wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.498994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:10.018 [2024-05-15 18:16:02.499260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499575] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499900] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.499999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.500013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.500025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.500037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.500049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:10.019 [2024-05-15 18:16:02.500070] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:10.019 [2024-05-15 18:16:02.500082] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7bf88be1-4a70-4e52-92b3-ca484d9799c3 00:24:10.019 [2024-05-15 18:16:02.500094] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:24:10.019 [2024-05-15 18:16:02.500116] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 17088 00:24:10.019 [2024-05-15 18:16:02.500127] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 16128 00:24:10.019 [2024-05-15 18:16:02.500139] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0595 00:24:10.019 [2024-05-15 18:16:02.500150] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:10.019 [2024-05-15 18:16:02.500162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:10.019 [2024-05-15 18:16:02.500173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:10.019 [2024-05-15 18:16:02.500183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:10.019 [2024-05-15 18:16:02.500210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:10.019 [2024-05-15 18:16:02.500222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.019 [2024-05-15 18:16:02.500239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:10.019 [2024-05-15 18:16:02.500252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.462 ms 00:24:10.019 [2024-05-15 18:16:02.500262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.019 [2024-05-15 18:16:02.517240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.019 [2024-05-15 18:16:02.517338] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:10.019 [2024-05-15 18:16:02.517357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.920 ms 00:24:10.019 [2024-05-15 18:16:02.517369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.019 [2024-05-15 18:16:02.517629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.019 [2024-05-15 18:16:02.517654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:10.019 [2024-05-15 18:16:02.517668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:24:10.019 [2024-05-15 18:16:02.517680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.279 [2024-05-15 18:16:02.567685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.279 [2024-05-15 18:16:02.567771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:10.279 [2024-05-15 18:16:02.567792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.279 [2024-05-15 18:16:02.567818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.279 [2024-05-15 18:16:02.567927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.279 [2024-05-15 18:16:02.567950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:10.279 [2024-05-15 18:16:02.567963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.279 [2024-05-15 18:16:02.567975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.279 [2024-05-15 18:16:02.568070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.279 [2024-05-15 18:16:02.568089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:10.279 [2024-05-15 18:16:02.568102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.279 [2024-05-15 18:16:02.568113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.279 [2024-05-15 18:16:02.568143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.279 [2024-05-15 18:16:02.568158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:10.279 [2024-05-15 18:16:02.568169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.279 [2024-05-15 18:16:02.568181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.279 [2024-05-15 18:16:02.675326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.279 [2024-05-15 18:16:02.675401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:10.279 [2024-05-15 18:16:02.675422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.279 [2024-05-15 18:16:02.675441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.279 [2024-05-15 18:16:02.715717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.279 [2024-05-15 18:16:02.715773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:10.279 [2024-05-15 18:16:02.715792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.279 [2024-05-15 18:16:02.715805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.279 [2024-05-15 18:16:02.715897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:10.279 [2024-05-15 18:16:02.715927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:10.279 [2024-05-15 18:16:02.715941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.279 [2024-05-15 18:16:02.715952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.279 [2024-05-15 18:16:02.716006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.279 [2024-05-15 18:16:02.716021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:10.279 [2024-05-15 18:16:02.716033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.279 [2024-05-15 18:16:02.716044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.279 [2024-05-15 18:16:02.716164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.279 [2024-05-15 18:16:02.716183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:10.279 [2024-05-15 18:16:02.716196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.279 [2024-05-15 18:16:02.716207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.279 [2024-05-15 18:16:02.716280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.279 [2024-05-15 18:16:02.716322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:10.279 [2024-05-15 18:16:02.716336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.279 [2024-05-15 18:16:02.716348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.280 [2024-05-15 18:16:02.716395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.280 [2024-05-15 18:16:02.716411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:10.280 [2024-05-15 18:16:02.716430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.280 [2024-05-15 18:16:02.716442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.280 [2024-05-15 18:16:02.716500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.280 [2024-05-15 18:16:02.716522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:10.280 [2024-05-15 18:16:02.716534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.280 [2024-05-15 18:16:02.716545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.280 [2024-05-15 18:16:02.716686] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 508.370 ms, result 0 00:24:11.654 00:24:11.655 00:24:11.655 18:16:03 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:13.569 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:13.569 18:16:06 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:13.569 18:16:06 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:13.569 18:16:06 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:13.828 18:16:06 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:13.828 18:16:06 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 
00:24:13.828 Process with pid 79204 is not found 00:24:13.828 Remove shared memory files 00:24:13.828 18:16:06 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79204 00:24:13.828 18:16:06 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 79204 ']' 00:24:13.828 18:16:06 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 79204 00:24:13.828 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (79204) - No such process 00:24:13.828 18:16:06 ftl.ftl_restore -- common/autotest_common.sh@973 -- # echo 'Process with pid 79204 is not found' 00:24:13.828 18:16:06 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:13.828 18:16:06 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:13.828 18:16:06 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:13.828 18:16:06 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:13.828 18:16:06 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:13.828 18:16:06 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:13.828 18:16:06 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:13.828 00:24:13.828 real 3m16.962s 00:24:13.828 user 3m3.339s 00:24:13.828 sys 0m15.911s 00:24:13.828 18:16:06 ftl.ftl_restore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:13.828 18:16:06 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:13.828 ************************************ 00:24:13.828 END TEST ftl_restore 00:24:13.828 ************************************ 00:24:13.828 18:16:06 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:13.828 18:16:06 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:13.828 18:16:06 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:13.828 18:16:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:13.828 ************************************ 00:24:13.828 START TEST ftl_dirty_shutdown 00:24:13.828 ************************************ 00:24:13.828 18:16:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:14.087 * Looking for test storage... 00:24:14.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # 
device=0000:00:11.0 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81262 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81262 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@827 -- # '[' -z 81262 ']' 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:14.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:14.087 18:16:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:14.087 [2024-05-15 18:16:06.519214] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:24:14.087 [2024-05-15 18:16:06.519402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81262 ] 00:24:14.347 [2024-05-15 18:16:06.694098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.606 [2024-05-15 18:16:06.939338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.544 18:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:15.544 18:16:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:15.544 18:16:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:15.544 18:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:15.544 18:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:15.544 18:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:15.544 18:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:15.544 18:16:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:15.817 18:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:15.818 18:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:15.818 18:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:15.818 18:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:24:15.818 18:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local 
bdev_info 00:24:15.818 18:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:24:15.818 18:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:24:15.818 18:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:16.078 18:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:16.078 { 00:24:16.078 "name": "nvme0n1", 00:24:16.078 "aliases": [ 00:24:16.078 "fa6b0003-d6a5-4cc7-9354-c2d945883600" 00:24:16.078 ], 00:24:16.078 "product_name": "NVMe disk", 00:24:16.078 "block_size": 4096, 00:24:16.078 "num_blocks": 1310720, 00:24:16.078 "uuid": "fa6b0003-d6a5-4cc7-9354-c2d945883600", 00:24:16.078 "assigned_rate_limits": { 00:24:16.078 "rw_ios_per_sec": 0, 00:24:16.078 "rw_mbytes_per_sec": 0, 00:24:16.078 "r_mbytes_per_sec": 0, 00:24:16.078 "w_mbytes_per_sec": 0 00:24:16.078 }, 00:24:16.078 "claimed": true, 00:24:16.078 "claim_type": "read_many_write_one", 00:24:16.078 "zoned": false, 00:24:16.078 "supported_io_types": { 00:24:16.078 "read": true, 00:24:16.078 "write": true, 00:24:16.078 "unmap": true, 00:24:16.078 "write_zeroes": true, 00:24:16.078 "flush": true, 00:24:16.078 "reset": true, 00:24:16.078 "compare": true, 00:24:16.078 "compare_and_write": false, 00:24:16.078 "abort": true, 00:24:16.078 "nvme_admin": true, 00:24:16.078 "nvme_io": true 00:24:16.078 }, 00:24:16.078 "driver_specific": { 00:24:16.078 "nvme": [ 00:24:16.078 { 00:24:16.078 "pci_address": "0000:00:11.0", 00:24:16.078 "trid": { 00:24:16.078 "trtype": "PCIe", 00:24:16.078 "traddr": "0000:00:11.0" 00:24:16.078 }, 00:24:16.078 "ctrlr_data": { 00:24:16.078 "cntlid": 0, 00:24:16.078 "vendor_id": "0x1b36", 00:24:16.078 "model_number": "QEMU NVMe Ctrl", 00:24:16.078 "serial_number": "12341", 00:24:16.078 "firmware_revision": "8.0.0", 00:24:16.078 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:16.078 "oacs": { 00:24:16.078 "security": 0, 00:24:16.078 "format": 1, 00:24:16.078 "firmware": 0, 00:24:16.078 "ns_manage": 1 00:24:16.078 }, 00:24:16.078 "multi_ctrlr": false, 00:24:16.078 "ana_reporting": false 00:24:16.078 }, 00:24:16.078 "vs": { 00:24:16.078 "nvme_version": "1.4" 00:24:16.078 }, 00:24:16.078 "ns_data": { 00:24:16.078 "id": 1, 00:24:16.078 "can_share": false 00:24:16.078 } 00:24:16.078 } 00:24:16.078 ], 00:24:16.078 "mp_policy": "active_passive" 00:24:16.078 } 00:24:16.078 } 00:24:16.078 ]' 00:24:16.078 18:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:16.078 18:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:24:16.078 18:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:16.078 18:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:24:16.078 18:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:24:16.078 18:16:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:24:16.078 18:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:16.078 18:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:16.078 18:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:16.078 18:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:16.078 18:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 
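The get_bdev_size helper traced above simply multiplies the block_size and num_blocks fields out of the bdev_get_bdevs JSON and converts the product to MiB. A minimal standalone sketch of that arithmetic, reusing the rpc.py path and jq filters from this run (the inline values are the ones reported above; the comments are added here):

    bs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # 4096
    nb=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))   # 4096 * 1310720 / 2^20 = 5120, the base_size echoed above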
00:24:16.336 18:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=0c15199b-6829-48a9-bd36-5f1e3a9e6738 00:24:16.336 18:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:16.336 18:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c15199b-6829-48a9-bd36-5f1e3a9e6738 00:24:16.604 18:16:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:16.871 18:16:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0254e4f0-d091-4fe6-85cc-537b69a4997e 00:24:16.871 18:16:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0254e4f0-d091-4fe6-85cc-537b69a4997e 00:24:17.130 18:16:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 00:24:17.130 18:16:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:17.130 18:16:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 00:24:17.130 18:16:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:17.130 18:16:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:17.130 18:16:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 00:24:17.130 18:16:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:17.130 18:16:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 00:24:17.130 18:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 00:24:17.130 18:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:17.130 18:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:24:17.130 18:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:24:17.130 18:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 00:24:17.388 18:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:17.388 { 00:24:17.388 "name": "90f6fd3c-2c6a-4665-853f-0b65e7bb4a45", 00:24:17.388 "aliases": [ 00:24:17.388 "lvs/nvme0n1p0" 00:24:17.388 ], 00:24:17.388 "product_name": "Logical Volume", 00:24:17.388 "block_size": 4096, 00:24:17.388 "num_blocks": 26476544, 00:24:17.388 "uuid": "90f6fd3c-2c6a-4665-853f-0b65e7bb4a45", 00:24:17.388 "assigned_rate_limits": { 00:24:17.388 "rw_ios_per_sec": 0, 00:24:17.388 "rw_mbytes_per_sec": 0, 00:24:17.388 "r_mbytes_per_sec": 0, 00:24:17.388 "w_mbytes_per_sec": 0 00:24:17.388 }, 00:24:17.388 "claimed": false, 00:24:17.388 "zoned": false, 00:24:17.388 "supported_io_types": { 00:24:17.388 "read": true, 00:24:17.388 "write": true, 00:24:17.388 "unmap": true, 00:24:17.388 "write_zeroes": true, 00:24:17.388 "flush": false, 00:24:17.388 "reset": true, 00:24:17.388 "compare": false, 00:24:17.388 "compare_and_write": false, 00:24:17.388 "abort": false, 00:24:17.388 "nvme_admin": false, 00:24:17.388 "nvme_io": false 00:24:17.388 }, 00:24:17.388 "driver_specific": { 00:24:17.388 "lvol": { 00:24:17.388 "lvol_store_uuid": "0254e4f0-d091-4fe6-85cc-537b69a4997e", 00:24:17.388 
"base_bdev": "nvme0n1", 00:24:17.388 "thin_provision": true, 00:24:17.388 "num_allocated_clusters": 0, 00:24:17.388 "snapshot": false, 00:24:17.388 "clone": false, 00:24:17.388 "esnap_clone": false 00:24:17.388 } 00:24:17.388 } 00:24:17.388 } 00:24:17.388 ]' 00:24:17.388 18:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:17.388 18:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:24:17.388 18:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:17.388 18:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:24:17.388 18:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:24:17.388 18:16:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:24:17.388 18:16:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:17.388 18:16:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:17.388 18:16:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:17.954 18:16:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:17.954 18:16:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:17.954 18:16:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 00:24:17.954 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 00:24:17.954 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:17.954 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:24:17.954 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:24:17.954 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 00:24:17.954 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:17.954 { 00:24:17.954 "name": "90f6fd3c-2c6a-4665-853f-0b65e7bb4a45", 00:24:17.954 "aliases": [ 00:24:17.954 "lvs/nvme0n1p0" 00:24:17.954 ], 00:24:17.954 "product_name": "Logical Volume", 00:24:17.954 "block_size": 4096, 00:24:17.954 "num_blocks": 26476544, 00:24:17.954 "uuid": "90f6fd3c-2c6a-4665-853f-0b65e7bb4a45", 00:24:17.954 "assigned_rate_limits": { 00:24:17.954 "rw_ios_per_sec": 0, 00:24:17.954 "rw_mbytes_per_sec": 0, 00:24:17.954 "r_mbytes_per_sec": 0, 00:24:17.954 "w_mbytes_per_sec": 0 00:24:17.954 }, 00:24:17.954 "claimed": false, 00:24:17.954 "zoned": false, 00:24:17.954 "supported_io_types": { 00:24:17.954 "read": true, 00:24:17.954 "write": true, 00:24:17.954 "unmap": true, 00:24:17.954 "write_zeroes": true, 00:24:17.954 "flush": false, 00:24:17.954 "reset": true, 00:24:17.954 "compare": false, 00:24:17.954 "compare_and_write": false, 00:24:17.954 "abort": false, 00:24:17.954 "nvme_admin": false, 00:24:17.954 "nvme_io": false 00:24:17.954 }, 00:24:17.954 "driver_specific": { 00:24:17.954 "lvol": { 00:24:17.955 "lvol_store_uuid": "0254e4f0-d091-4fe6-85cc-537b69a4997e", 00:24:17.955 "base_bdev": "nvme0n1", 00:24:17.955 "thin_provision": true, 00:24:17.955 "num_allocated_clusters": 0, 00:24:17.955 "snapshot": false, 00:24:17.955 "clone": false, 00:24:17.955 "esnap_clone": false 00:24:17.955 } 00:24:17.955 } 00:24:17.955 
} 00:24:17.955 ]' 00:24:17.955 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:18.213 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:24:18.214 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:18.214 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:24:18.214 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:24:18.214 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:24:18.214 18:16:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:18.214 18:16:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:18.472 18:16:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:18.472 18:16:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 00:24:18.472 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 00:24:18.472 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:18.472 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:24:18.472 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:24:18.472 18:16:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 00:24:18.730 18:16:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:18.730 { 00:24:18.730 "name": "90f6fd3c-2c6a-4665-853f-0b65e7bb4a45", 00:24:18.730 "aliases": [ 00:24:18.730 "lvs/nvme0n1p0" 00:24:18.730 ], 00:24:18.730 "product_name": "Logical Volume", 00:24:18.730 "block_size": 4096, 00:24:18.730 "num_blocks": 26476544, 00:24:18.730 "uuid": "90f6fd3c-2c6a-4665-853f-0b65e7bb4a45", 00:24:18.730 "assigned_rate_limits": { 00:24:18.730 "rw_ios_per_sec": 0, 00:24:18.730 "rw_mbytes_per_sec": 0, 00:24:18.730 "r_mbytes_per_sec": 0, 00:24:18.730 "w_mbytes_per_sec": 0 00:24:18.730 }, 00:24:18.730 "claimed": false, 00:24:18.730 "zoned": false, 00:24:18.730 "supported_io_types": { 00:24:18.730 "read": true, 00:24:18.730 "write": true, 00:24:18.730 "unmap": true, 00:24:18.730 "write_zeroes": true, 00:24:18.730 "flush": false, 00:24:18.730 "reset": true, 00:24:18.730 "compare": false, 00:24:18.730 "compare_and_write": false, 00:24:18.730 "abort": false, 00:24:18.730 "nvme_admin": false, 00:24:18.730 "nvme_io": false 00:24:18.730 }, 00:24:18.730 "driver_specific": { 00:24:18.730 "lvol": { 00:24:18.730 "lvol_store_uuid": "0254e4f0-d091-4fe6-85cc-537b69a4997e", 00:24:18.730 "base_bdev": "nvme0n1", 00:24:18.730 "thin_provision": true, 00:24:18.730 "num_allocated_clusters": 0, 00:24:18.730 "snapshot": false, 00:24:18.730 "clone": false, 00:24:18.730 "esnap_clone": false 00:24:18.730 } 00:24:18.730 } 00:24:18.730 } 00:24:18.730 ]' 00:24:18.730 18:16:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:18.730 18:16:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:24:18.730 18:16:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:18.730 18:16:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 
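The NV cache side mirrors the base-device setup: the same get_bdev_size arithmetic on the 26476544-block lvol yields 103424 MiB, and a single 5171 MiB split of the cache controller's namespace becomes the FTL write-buffer cache. Condensed to the two RPCs from the trace, with rpc.py standing for the full scripts/rpc.py path used above (comments added here):

    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # exposes namespace nvc0n1
    rpc.py bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB split -> nvc0n1p0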
00:24:18.730 18:16:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:24:18.730 18:16:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:24:18.730 18:16:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:18.730 18:16:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 --l2p_dram_limit 10' 00:24:18.730 18:16:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:18.730 18:16:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:18.730 18:16:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:18.730 18:16:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 90f6fd3c-2c6a-4665-853f-0b65e7bb4a45 --l2p_dram_limit 10 -c nvc0n1p0 00:24:18.991 [2024-05-15 18:16:11.394631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.991 [2024-05-15 18:16:11.394708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:18.991 [2024-05-15 18:16:11.394752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:18.991 [2024-05-15 18:16:11.394766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.991 [2024-05-15 18:16:11.394845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.991 [2024-05-15 18:16:11.394865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:18.991 [2024-05-15 18:16:11.394885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:18.991 [2024-05-15 18:16:11.394898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.991 [2024-05-15 18:16:11.394932] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:18.991 [2024-05-15 18:16:11.395956] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:18.991 [2024-05-15 18:16:11.395997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.991 [2024-05-15 18:16:11.396012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:18.991 [2024-05-15 18:16:11.396031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:24:18.991 [2024-05-15 18:16:11.396044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.991 [2024-05-15 18:16:11.396203] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID edaff169-3ef7-4aa7-ab5f-0876c2bbcd36 00:24:18.991 [2024-05-15 18:16:11.398093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.991 [2024-05-15 18:16:11.398136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:18.991 [2024-05-15 18:16:11.398173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:18.991 [2024-05-15 18:16:11.398187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.991 [2024-05-15 18:16:11.408239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.991 [2024-05-15 18:16:11.408309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:18.991 [2024-05-15 18:16:11.408330] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.994 ms 00:24:18.991 [2024-05-15 18:16:11.408346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.991 [2024-05-15 18:16:11.408481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.991 [2024-05-15 18:16:11.408505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:18.991 [2024-05-15 18:16:11.408520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:24:18.991 [2024-05-15 18:16:11.408535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.991 [2024-05-15 18:16:11.408621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.991 [2024-05-15 18:16:11.408646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:18.991 [2024-05-15 18:16:11.408661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:18.991 [2024-05-15 18:16:11.408676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.991 [2024-05-15 18:16:11.408709] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:18.991 [2024-05-15 18:16:11.414232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.991 [2024-05-15 18:16:11.414269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:18.991 [2024-05-15 18:16:11.414322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.527 ms 00:24:18.991 [2024-05-15 18:16:11.414367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.991 [2024-05-15 18:16:11.414418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.991 [2024-05-15 18:16:11.414433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:18.991 [2024-05-15 18:16:11.414448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:18.991 [2024-05-15 18:16:11.414460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.991 [2024-05-15 18:16:11.414506] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:18.991 [2024-05-15 18:16:11.414675] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:18.992 [2024-05-15 18:16:11.414704] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:18.992 [2024-05-15 18:16:11.414722] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:18.992 [2024-05-15 18:16:11.414744] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:18.992 [2024-05-15 18:16:11.414759] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:18.992 [2024-05-15 18:16:11.414774] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:18.992 [2024-05-15 18:16:11.414787] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:18.992 [2024-05-15 18:16:11.414801] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:18.992 [2024-05-15 18:16:11.414813] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:18.992 [2024-05-15 18:16:11.414828] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.992 [2024-05-15 18:16:11.414841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:18.992 [2024-05-15 18:16:11.414861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:24:18.992 [2024-05-15 18:16:11.414874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.992 [2024-05-15 18:16:11.414952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.992 [2024-05-15 18:16:11.415013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:18.992 [2024-05-15 18:16:11.415029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:18.992 [2024-05-15 18:16:11.415042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.992 [2024-05-15 18:16:11.415130] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:18.992 [2024-05-15 18:16:11.415155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:18.992 [2024-05-15 18:16:11.415174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:18.992 [2024-05-15 18:16:11.415191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.992 [2024-05-15 18:16:11.415206] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:18.992 [2024-05-15 18:16:11.415217] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:18.992 [2024-05-15 18:16:11.415231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:18.992 [2024-05-15 18:16:11.415243] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:18.992 [2024-05-15 18:16:11.415257] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:18.992 [2024-05-15 18:16:11.415268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:18.992 [2024-05-15 18:16:11.415282] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:18.992 [2024-05-15 18:16:11.415294] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:18.992 [2024-05-15 18:16:11.415323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:18.992 [2024-05-15 18:16:11.415335] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:18.992 [2024-05-15 18:16:11.415366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:18.992 [2024-05-15 18:16:11.415393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.992 [2024-05-15 18:16:11.415407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:18.992 [2024-05-15 18:16:11.415432] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:18.992 [2024-05-15 18:16:11.415451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.992 [2024-05-15 18:16:11.415464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:18.992 [2024-05-15 18:16:11.415479] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:18.992 [2024-05-15 18:16:11.415491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:18.992 [2024-05-15 18:16:11.415505] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:18.992 [2024-05-15 18:16:11.415517] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:18.992 [2024-05-15 
18:16:11.415530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:18.992 [2024-05-15 18:16:11.415542] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:18.992 [2024-05-15 18:16:11.415555] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:18.992 [2024-05-15 18:16:11.415567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:18.992 [2024-05-15 18:16:11.415580] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:18.992 [2024-05-15 18:16:11.415592] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:18.992 [2024-05-15 18:16:11.415605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:18.992 [2024-05-15 18:16:11.415616] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:18.992 [2024-05-15 18:16:11.415630] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:18.992 [2024-05-15 18:16:11.415641] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:18.992 [2024-05-15 18:16:11.415658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:18.992 [2024-05-15 18:16:11.415685] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:18.992 [2024-05-15 18:16:11.415715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:18.992 [2024-05-15 18:16:11.415743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:18.992 [2024-05-15 18:16:11.415757] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:18.992 [2024-05-15 18:16:11.415769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:18.992 [2024-05-15 18:16:11.415784] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:18.992 [2024-05-15 18:16:11.415797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:18.992 [2024-05-15 18:16:11.415812] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:18.992 [2024-05-15 18:16:11.415824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.992 [2024-05-15 18:16:11.415840] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:18.992 [2024-05-15 18:16:11.415852] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:18.992 [2024-05-15 18:16:11.415866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:18.992 [2024-05-15 18:16:11.415878] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:18.992 [2024-05-15 18:16:11.415892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:18.992 [2024-05-15 18:16:11.415904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:18.992 [2024-05-15 18:16:11.415934] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:18.992 [2024-05-15 18:16:11.415954] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:18.992 [2024-05-15 18:16:11.415970] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:18.992 [2024-05-15 18:16:11.415983] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 
blk_sz:0x80 00:24:18.992 [2024-05-15 18:16:11.416000] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:18.992 [2024-05-15 18:16:11.416013] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:18.992 [2024-05-15 18:16:11.416027] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:18.992 [2024-05-15 18:16:11.416040] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:18.992 [2024-05-15 18:16:11.416055] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:18.992 [2024-05-15 18:16:11.416068] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:18.992 [2024-05-15 18:16:11.416082] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:18.992 [2024-05-15 18:16:11.416095] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:18.992 [2024-05-15 18:16:11.416110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:18.992 [2024-05-15 18:16:11.416123] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:18.992 [2024-05-15 18:16:11.416139] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:18.992 [2024-05-15 18:16:11.416152] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:18.992 [2024-05-15 18:16:11.416172] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:18.992 [2024-05-15 18:16:11.416186] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:18.992 [2024-05-15 18:16:11.416201] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:18.992 [2024-05-15 18:16:11.416214] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:18.992 [2024-05-15 18:16:11.416229] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:18.992 [2024-05-15 18:16:11.416243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.992 [2024-05-15 18:16:11.416262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:18.992 [2024-05-15 18:16:11.416277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.161 ms 00:24:18.992 [2024-05-15 18:16:11.416309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.992 [2024-05-15 18:16:11.438969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.992 [2024-05-15 
18:16:11.439031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:18.992 [2024-05-15 18:16:11.439052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.580 ms 00:24:18.992 [2024-05-15 18:16:11.439067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.992 [2024-05-15 18:16:11.439196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.992 [2024-05-15 18:16:11.439216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:18.992 [2024-05-15 18:16:11.439231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:18.992 [2024-05-15 18:16:11.439247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.992 [2024-05-15 18:16:11.482455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.992 [2024-05-15 18:16:11.482555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:18.992 [2024-05-15 18:16:11.482577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.110 ms 00:24:18.992 [2024-05-15 18:16:11.482608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.993 [2024-05-15 18:16:11.482697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.993 [2024-05-15 18:16:11.482715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:18.993 [2024-05-15 18:16:11.482729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:18.993 [2024-05-15 18:16:11.482743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.993 [2024-05-15 18:16:11.483440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.993 [2024-05-15 18:16:11.483498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:18.993 [2024-05-15 18:16:11.483515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:24:18.993 [2024-05-15 18:16:11.483532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.993 [2024-05-15 18:16:11.483685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.993 [2024-05-15 18:16:11.483708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:18.993 [2024-05-15 18:16:11.483722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:24:18.993 [2024-05-15 18:16:11.483738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.251 [2024-05-15 18:16:11.505522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.251 [2024-05-15 18:16:11.505581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:19.251 [2024-05-15 18:16:11.505601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.755 ms 00:24:19.251 [2024-05-15 18:16:11.505616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.251 [2024-05-15 18:16:11.520546] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:19.251 [2024-05-15 18:16:11.524721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.252 [2024-05-15 18:16:11.524774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:19.252 [2024-05-15 18:16:11.524799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.942 ms 00:24:19.252 [2024-05-15 18:16:11.524813] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.252 [2024-05-15 18:16:11.594193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.252 [2024-05-15 18:16:11.594258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:19.252 [2024-05-15 18:16:11.594299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.324 ms 00:24:19.252 [2024-05-15 18:16:11.594343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.252 [2024-05-15 18:16:11.594421] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:24:19.252 [2024-05-15 18:16:11.594460] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:24:21.841 [2024-05-15 18:16:14.200522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.841 [2024-05-15 18:16:14.200593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:21.841 [2024-05-15 18:16:14.200624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2606.090 ms 00:24:21.841 [2024-05-15 18:16:14.200638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.841 [2024-05-15 18:16:14.200882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.841 [2024-05-15 18:16:14.200902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:21.841 [2024-05-15 18:16:14.200920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:24:21.841 [2024-05-15 18:16:14.200933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.841 [2024-05-15 18:16:14.231631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.841 [2024-05-15 18:16:14.231683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:21.841 [2024-05-15 18:16:14.231737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.621 ms 00:24:21.841 [2024-05-15 18:16:14.231751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.841 [2024-05-15 18:16:14.262067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.841 [2024-05-15 18:16:14.262116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:21.841 [2024-05-15 18:16:14.262156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.261 ms 00:24:21.841 [2024-05-15 18:16:14.262169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.841 [2024-05-15 18:16:14.262642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.841 [2024-05-15 18:16:14.262672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:21.841 [2024-05-15 18:16:14.262691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:24:21.841 [2024-05-15 18:16:14.262703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.099 [2024-05-15 18:16:14.341523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.099 [2024-05-15 18:16:14.341583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:22.099 [2024-05-15 18:16:14.341643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.748 ms 00:24:22.099 [2024-05-15 18:16:14.341657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:22.099 [2024-05-15 18:16:14.374782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.099 [2024-05-15 18:16:14.374879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:22.099 [2024-05-15 18:16:14.374918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.045 ms 00:24:22.099 [2024-05-15 18:16:14.374932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.099 [2024-05-15 18:16:14.377207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.099 [2024-05-15 18:16:14.377254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:22.099 [2024-05-15 18:16:14.377275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.222 ms 00:24:22.099 [2024-05-15 18:16:14.377288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.099 [2024-05-15 18:16:14.408033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.099 [2024-05-15 18:16:14.408090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:22.099 [2024-05-15 18:16:14.408113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.641 ms 00:24:22.099 [2024-05-15 18:16:14.408127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.099 [2024-05-15 18:16:14.408192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.099 [2024-05-15 18:16:14.408212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:22.099 [2024-05-15 18:16:14.408229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:22.099 [2024-05-15 18:16:14.408242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.099 [2024-05-15 18:16:14.408437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.099 [2024-05-15 18:16:14.408489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:22.099 [2024-05-15 18:16:14.408517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:22.099 [2024-05-15 18:16:14.408530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.099 [2024-05-15 18:16:14.409909] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3014.652 ms, result 0 00:24:22.099 { 00:24:22.099 "name": "ftl0", 00:24:22.099 "uuid": "edaff169-3ef7-4aa7-ab5f-0876c2bbcd36" 00:24:22.099 } 00:24:22.099 18:16:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:22.099 18:16:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:22.358 18:16:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:22.358 18:16:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:22.358 18:16:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:22.617 /dev/nbd0 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@865 -- # local i 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i 
= 1 )) 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # break 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:22.617 1+0 records in 00:24:22.617 1+0 records out 00:24:22.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293097 s, 14.0 MB/s 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # size=4096 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # return 0 00:24:22.617 18:16:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:22.876 [2024-05-15 18:16:15.128635] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:24:22.876 [2024-05-15 18:16:15.128821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81404 ] 00:24:22.876 [2024-05-15 18:16:15.302706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.136 [2024-05-15 18:16:15.557993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.060  Copying: 167/1024 [MB] (167 MBps) Copying: 333/1024 [MB] (166 MBps) Copying: 506/1024 [MB] (172 MBps) Copying: 676/1024 [MB] (170 MBps) Copying: 842/1024 [MB] (165 MBps) Copying: 1008/1024 [MB] (165 MBps) Copying: 1024/1024 [MB] (average 167 MBps) 00:24:31.060 00:24:31.060 18:16:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:32.964 18:16:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:32.964 [2024-05-15 18:16:25.369197] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
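Before anything touches the FTL device, spdk_dd stages count * bs = 262144 blocks of 4096 B of urandom data into the test file. A quick sanity check of that total against the progress counter above:

    echo $(( 262144 * 4096 / 1024 / 1024 ))   # 1024 MiB, matching "Copying: 1024/1024 [MB] (average 167 MBps)"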
00:24:32.964 [2024-05-15 18:16:25.369366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81510 ] 00:24:33.225 [2024-05-15 18:16:25.532749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.484 [2024-05-15 18:16:25.771058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.341  Copying: 15/1024 [MB] (15 MBps) Copying: 31/1024 [MB] (15 MBps) Copying: 47/1024 [MB] (16 MBps) Copying: 63/1024 [MB] (15 MBps) Copying: 77/1024 [MB] (13 MBps) Copying: 91/1024 [MB] (14 MBps) Copying: 107/1024 [MB] (15 MBps) Copying: 123/1024 [MB] (16 MBps) Copying: 138/1024 [MB] (15 MBps) Copying: 154/1024 [MB] (15 MBps) Copying: 170/1024 [MB] (16 MBps) Copying: 186/1024 [MB] (15 MBps) Copying: 200/1024 [MB] (14 MBps) Copying: 215/1024 [MB] (15 MBps) Copying: 231/1024 [MB] (15 MBps) Copying: 246/1024 [MB] (15 MBps) Copying: 262/1024 [MB] (16 MBps) Copying: 278/1024 [MB] (16 MBps) Copying: 294/1024 [MB] (15 MBps) Copying: 310/1024 [MB] (15 MBps) Copying: 326/1024 [MB] (16 MBps) Copying: 342/1024 [MB] (15 MBps) Copying: 358/1024 [MB] (15 MBps) Copying: 373/1024 [MB] (15 MBps) Copying: 388/1024 [MB] (15 MBps) Copying: 403/1024 [MB] (14 MBps) Copying: 419/1024 [MB] (15 MBps) Copying: 434/1024 [MB] (15 MBps) Copying: 449/1024 [MB] (15 MBps) Copying: 464/1024 [MB] (15 MBps) Copying: 480/1024 [MB] (15 MBps) Copying: 496/1024 [MB] (15 MBps) Copying: 511/1024 [MB] (15 MBps) Copying: 526/1024 [MB] (15 MBps) Copying: 542/1024 [MB] (15 MBps) Copying: 557/1024 [MB] (15 MBps) Copying: 573/1024 [MB] (15 MBps) Copying: 588/1024 [MB] (15 MBps) Copying: 604/1024 [MB] (15 MBps) Copying: 619/1024 [MB] (15 MBps) Copying: 636/1024 [MB] (16 MBps) Copying: 652/1024 [MB] (15 MBps) Copying: 668/1024 [MB] (15 MBps) Copying: 684/1024 [MB] (16 MBps) Copying: 700/1024 [MB] (16 MBps) Copying: 715/1024 [MB] (14 MBps) Copying: 731/1024 [MB] (15 MBps) Copying: 747/1024 [MB] (16 MBps) Copying: 764/1024 [MB] (16 MBps) Copying: 780/1024 [MB] (16 MBps) Copying: 797/1024 [MB] (17 MBps) Copying: 814/1024 [MB] (16 MBps) Copying: 831/1024 [MB] (17 MBps) Copying: 848/1024 [MB] (16 MBps) Copying: 865/1024 [MB] (16 MBps) Copying: 881/1024 [MB] (16 MBps) Copying: 897/1024 [MB] (16 MBps) Copying: 912/1024 [MB] (15 MBps) Copying: 928/1024 [MB] (15 MBps) Copying: 943/1024 [MB] (15 MBps) Copying: 958/1024 [MB] (15 MBps) Copying: 973/1024 [MB] (14 MBps) Copying: 989/1024 [MB] (16 MBps) Copying: 1004/1024 [MB] (15 MBps) Copying: 1019/1024 [MB] (14 MBps) Copying: 1024/1024 [MB] (average 15 MBps) 00:25:40.341 00:25:40.341 18:17:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:40.342 18:17:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:40.600 18:17:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:40.600 [2024-05-15 18:17:33.085309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.600 [2024-05-15 18:17:33.085381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:40.600 [2024-05-15 18:17:33.085404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:40.600 [2024-05-15 18:17:33.085422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
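The replay-and-teardown path that produced the trace above and the unload below boils down to the following commands, each of which appears verbatim in the log (rpc.py again standing for the full scripts/rpc.py path; only the inline comments are added here). Note the 15 MBps average through the FTL-backed nbd device versus the 167 MBps raw file fill earlier:

    rpc.py nbd_start_disk ftl0 /dev/nbd0          # expose ftl0 as a kernel block device
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
    sync /dev/nbd0
    rpc.py nbd_stop_disk /dev/nbd0
    rpc.py bdev_ftl_unload -b ftl0                # persists L2P and metadata, then sets the clean state logged below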
00:25:40.600 [2024-05-15 18:17:33.085476] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:40.600 [2024-05-15 18:17:33.089548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.600 [2024-05-15 18:17:33.089834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:40.600 [2024-05-15 18:17:33.089971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.037 ms 00:25:40.600 [2024-05-15 18:17:33.090035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.600 [2024-05-15 18:17:33.092048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.600 [2024-05-15 18:17:33.092213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:40.600 [2024-05-15 18:17:33.092370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.846 ms 00:25:40.600 [2024-05-15 18:17:33.092396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.859 [2024-05-15 18:17:33.109536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.859 [2024-05-15 18:17:33.109589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:40.859 [2024-05-15 18:17:33.109614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.098 ms 00:25:40.859 [2024-05-15 18:17:33.109632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.859 [2024-05-15 18:17:33.116289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.859 [2024-05-15 18:17:33.116336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:40.859 [2024-05-15 18:17:33.116362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.604 ms 00:25:40.859 [2024-05-15 18:17:33.116375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.859 [2024-05-15 18:17:33.146665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.859 [2024-05-15 18:17:33.146729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:40.859 [2024-05-15 18:17:33.146769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.171 ms 00:25:40.859 [2024-05-15 18:17:33.146781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.859 [2024-05-15 18:17:33.166169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.859 [2024-05-15 18:17:33.166226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:40.859 [2024-05-15 18:17:33.166251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.328 ms 00:25:40.859 [2024-05-15 18:17:33.166265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.859 [2024-05-15 18:17:33.166501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.859 [2024-05-15 18:17:33.166529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:40.859 [2024-05-15 18:17:33.166547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:25:40.859 [2024-05-15 18:17:33.166560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.859 [2024-05-15 18:17:33.199408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.859 [2024-05-15 18:17:33.199518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:40.859 
[2024-05-15 18:17:33.199558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.813 ms
00:25:40.859 [2024-05-15 18:17:33.199572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:40.859 [2024-05-15 18:17:33.227928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:40.859 [2024-05-15 18:17:33.228043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:25:40.859 [2024-05-15 18:17:33.228087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.282 ms
00:25:40.859 [2024-05-15 18:17:33.228100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:40.859 [2024-05-15 18:17:33.255810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:40.859 [2024-05-15 18:17:33.255886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:25:40.859 [2024-05-15 18:17:33.255925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.612 ms
00:25:40.859 [2024-05-15 18:17:33.255960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:40.859 [2024-05-15 18:17:33.282688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:40.859 [2024-05-15 18:17:33.282761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:25:40.859 [2024-05-15 18:17:33.282800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.529 ms
00:25:40.859 [2024-05-15 18:17:33.282828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:40.859 [2024-05-15 18:17:33.282883] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:40.859 [2024-05-15 18:17:33.282935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
(Bands 2 through 100 each logged identically: 0 / 261120 wr_cnt: 0 state: free)
00:25:40.861 [2024-05-15 18:17:33.284564] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:40.861 [2024-05-15 18:17:33.284579] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: edaff169-3ef7-4aa7-ab5f-0876c2bbcd36
00:25:40.861 [2024-05-15 18:17:33.284592] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:40.861 [2024-05-15 18:17:33.284606] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:40.861 [2024-05-15 18:17:33.284617] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:40.861 [2024-05-15 18:17:33.284632] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:40.861 [2024-05-15 18:17:33.284644] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:40.861 [2024-05-15 18:17:33.284674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:40.861 [2024-05-15 18:17:33.284704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:40.861 [2024-05-15 18:17:33.284717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:40.861 [2024-05-15 18:17:33.284727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:40.861 [2024-05-15 18:17:33.284742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:40.861 [2024-05-15 18:17:33.284754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:40.861 [2024-05-15 18:17:33.284772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.863 ms
00:25:40.861 [2024-05-15 18:17:33.284799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:40.861 [2024-05-15 18:17:33.301420] mngt/ftl_mngt.c: 406:trace_step:
*NOTICE*: [FTL][ftl0] Action 00:25:40.861 [2024-05-15 18:17:33.301484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:40.861 [2024-05-15 18:17:33.301524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.533 ms 00:25:40.861 [2024-05-15 18:17:33.301536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.861 [2024-05-15 18:17:33.301802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.861 [2024-05-15 18:17:33.301818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:40.861 [2024-05-15 18:17:33.301833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:25:40.861 [2024-05-15 18:17:33.301844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.861 [2024-05-15 18:17:33.356912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.861 [2024-05-15 18:17:33.356987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:40.861 [2024-05-15 18:17:33.357028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.861 [2024-05-15 18:17:33.357045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.861 [2024-05-15 18:17:33.357141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.861 [2024-05-15 18:17:33.357157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:40.861 [2024-05-15 18:17:33.357176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.861 [2024-05-15 18:17:33.357203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.861 [2024-05-15 18:17:33.357417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.861 [2024-05-15 18:17:33.357454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:40.861 [2024-05-15 18:17:33.357472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.861 [2024-05-15 18:17:33.357484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.861 [2024-05-15 18:17:33.357523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.861 [2024-05-15 18:17:33.357538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:40.861 [2024-05-15 18:17:33.357554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.861 [2024-05-15 18:17:33.357566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.121 [2024-05-15 18:17:33.458715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.121 [2024-05-15 18:17:33.458785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:41.121 [2024-05-15 18:17:33.458823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.121 [2024-05-15 18:17:33.458836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.121 [2024-05-15 18:17:33.497248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.121 [2024-05-15 18:17:33.497348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:41.121 [2024-05-15 18:17:33.497390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.121 [2024-05-15 18:17:33.497403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
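
Every management step in this log, the shutdown Actions above and the Rollback records that continue below, is reported as the same four-line group (Action or Rollback, then name, duration, status) printed by trace_step in mngt/ftl_mngt.c. A toy shell re-creation of that record shape, illustrative only; the real tracer is C inside SPDK, and the function body here is invented:

  # Toy re-creation of the four-line trace_step record seen throughout
  # this log (real implementation: mngt/ftl_mngt.c, trace_step, in C).
  trace_step() {   # usage: trace_step Action|Rollback <name> <start_ns> <status>
      local now dur
      now=$(date +%s%N)
      dur=$(awk -v d=$((now - $3)) 'BEGIN { printf "%.3f", d / 1e6 }')
      printf '[FTL][ftl0] %s\n'               "$1"
      printf '[FTL][ftl0]   name:     %s\n'   "$2"
      printf '[FTL][ftl0]   duration: %s ms\n' "$dur"
      printf '[FTL][ftl0]   status:   %s\n'   "$4"
  }
  start=$(date +%s%N); sleep 0.03
  trace_step Action "persist trim metadata" "$start" 0   # ~30 ms, like the records above

The Rollback records here all show duration: 0.000 ms, consistent with undo steps that find nothing to roll back.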
00:25:41.121 [2024-05-15 18:17:33.497510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.121 [2024-05-15 18:17:33.497528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:41.121 [2024-05-15 18:17:33.497543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.121 [2024-05-15 18:17:33.497562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.121 [2024-05-15 18:17:33.497627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.121 [2024-05-15 18:17:33.497655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:41.121 [2024-05-15 18:17:33.497669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.121 [2024-05-15 18:17:33.497680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.121 [2024-05-15 18:17:33.497822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.121 [2024-05-15 18:17:33.497841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:41.121 [2024-05-15 18:17:33.497856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.121 [2024-05-15 18:17:33.497867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.121 [2024-05-15 18:17:33.497924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.121 [2024-05-15 18:17:33.497941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:41.121 [2024-05-15 18:17:33.497962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.121 [2024-05-15 18:17:33.497974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.121 [2024-05-15 18:17:33.498025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.121 [2024-05-15 18:17:33.498047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:41.121 [2024-05-15 18:17:33.498063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.121 [2024-05-15 18:17:33.498074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.121 [2024-05-15 18:17:33.498131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.121 [2024-05-15 18:17:33.498149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:41.121 [2024-05-15 18:17:33.498163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.121 [2024-05-15 18:17:33.498175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.121 [2024-05-15 18:17:33.498474] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 413.094 ms, result 0 00:25:41.121 true 00:25:41.121 18:17:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81262 00:25:41.121 18:17:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81262 00:25:41.121 18:17:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:41.380 [2024-05-15 18:17:33.627512] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
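
At this point dirty_shutdown.sh has SIGKILLed the spdk_tgt process (pid 81262 in this run), removed its stale trace file from /dev/shm, and launched a standalone spdk_dd to build the test input; 262144 blocks of 4096 bytes is exactly the 1 GiB copied below. A condensed sketch of the sequence traced at script lines 83-88 above, with the long /home/vagrant/spdk_repo paths abbreviated:

  # Condensed from the dirty_shutdown.sh xtrace above; 81262 is the
  # spdk_tgt pid of this particular run, and paths are shortened.
  kill -9 81262                               # crash the target: no clean FTL shutdown
  rm -f /dev/shm/spdk_tgt_trace.pid81262      # clear the dead target's trace file
  # 262144 x 4096 B = 1 GiB of random input for the post-crash write:
  spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144
  # write it to the FTL bdev; ftl.json lets spdk_dd rebuild the bdev stack on its own:
  spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
          --json=test/ftl/config/ftl.json

The second spdk_dd run is what produces the FTL startup below: because the SIGKILL bypassed the clean-shutdown path, the next open finds the device dirty and runs recovery (blobstore recovery, the NV cache chunk scan, and Restore P2L checkpoints, the longest single startup step at 81.837 ms).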
00:25:41.380 [2024-05-15 18:17:33.627695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82187 ] 00:25:41.380 [2024-05-15 18:17:33.792576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.639 [2024-05-15 18:17:34.037005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.019  Copying: 164/1024 [MB] (164 MBps) Copying: 324/1024 [MB] (160 MBps) Copying: 473/1024 [MB] (148 MBps) Copying: 600/1024 [MB] (127 MBps) Copying: 727/1024 [MB] (127 MBps) Copying: 869/1024 [MB] (141 MBps) Copying: 1024/1024 [MB] (average 147 MBps) 00:25:50.019 00:25:50.019 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81262 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:50.019 18:17:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:50.276 [2024-05-15 18:17:42.607229] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:25:50.276 [2024-05-15 18:17:42.607447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82280 ] 00:25:50.534 [2024-05-15 18:17:42.787524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.791 [2024-05-15 18:17:43.081003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.048 [2024-05-15 18:17:43.432814] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:51.048 [2024-05-15 18:17:43.432910] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:51.048 [2024-05-15 18:17:43.497230] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:51.048 [2024-05-15 18:17:43.497594] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:51.048 [2024-05-15 18:17:43.497824] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:51.308 [2024-05-15 18:17:43.746186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.308 [2024-05-15 18:17:43.746251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:51.308 [2024-05-15 18:17:43.746274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:51.308 [2024-05-15 18:17:43.746286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.308 [2024-05-15 18:17:43.746410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.308 [2024-05-15 18:17:43.746435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:51.308 [2024-05-15 18:17:43.746448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:51.308 [2024-05-15 18:17:43.746465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.308 [2024-05-15 18:17:43.746500] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:51.308 [2024-05-15 18:17:43.747417] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as 
NV Cache device 00:25:51.308 [2024-05-15 18:17:43.747452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.308 [2024-05-15 18:17:43.747471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:51.308 [2024-05-15 18:17:43.747484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:25:51.308 [2024-05-15 18:17:43.747495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.308 [2024-05-15 18:17:43.749439] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:51.308 [2024-05-15 18:17:43.767028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.308 [2024-05-15 18:17:43.767075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:51.308 [2024-05-15 18:17:43.767093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.590 ms 00:25:51.308 [2024-05-15 18:17:43.767105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.308 [2024-05-15 18:17:43.767174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.308 [2024-05-15 18:17:43.767194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:51.308 [2024-05-15 18:17:43.767212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:51.308 [2024-05-15 18:17:43.767223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.308 [2024-05-15 18:17:43.775959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.308 [2024-05-15 18:17:43.776007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:51.308 [2024-05-15 18:17:43.776024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.592 ms 00:25:51.308 [2024-05-15 18:17:43.776036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.308 [2024-05-15 18:17:43.776142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.308 [2024-05-15 18:17:43.776163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:51.308 [2024-05-15 18:17:43.776176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:25:51.308 [2024-05-15 18:17:43.776187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.308 [2024-05-15 18:17:43.776246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.308 [2024-05-15 18:17:43.776264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:51.308 [2024-05-15 18:17:43.776277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:51.308 [2024-05-15 18:17:43.776288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.308 [2024-05-15 18:17:43.776360] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:51.308 [2024-05-15 18:17:43.781420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.308 [2024-05-15 18:17:43.781464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:51.308 [2024-05-15 18:17:43.781481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.070 ms 00:25:51.308 [2024-05-15 18:17:43.781493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.308 [2024-05-15 18:17:43.781545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:51.308 [2024-05-15 18:17:43.781562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:51.308 [2024-05-15 18:17:43.781575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:51.308 [2024-05-15 18:17:43.781586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.308 [2024-05-15 18:17:43.781649] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:51.308 [2024-05-15 18:17:43.781683] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:51.308 [2024-05-15 18:17:43.781743] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:51.308 [2024-05-15 18:17:43.781776] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:51.308 [2024-05-15 18:17:43.781859] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:51.308 [2024-05-15 18:17:43.781875] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:51.308 [2024-05-15 18:17:43.781890] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:51.308 [2024-05-15 18:17:43.781906] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:51.308 [2024-05-15 18:17:43.781919] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:51.308 [2024-05-15 18:17:43.781931] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:51.308 [2024-05-15 18:17:43.781942] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:51.308 [2024-05-15 18:17:43.781953] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:51.308 [2024-05-15 18:17:43.781964] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:51.308 [2024-05-15 18:17:43.781977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.308 [2024-05-15 18:17:43.782004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:51.308 [2024-05-15 18:17:43.782016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:25:51.308 [2024-05-15 18:17:43.782028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.308 [2024-05-15 18:17:43.782112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.308 [2024-05-15 18:17:43.782131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:51.308 [2024-05-15 18:17:43.782144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:51.308 [2024-05-15 18:17:43.782154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.308 [2024-05-15 18:17:43.782246] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:51.308 [2024-05-15 18:17:43.782264] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:51.308 [2024-05-15 18:17:43.782281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:51.308 [2024-05-15 18:17:43.782314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.308 [2024-05-15 18:17:43.782332] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:51.308 [2024-05-15 18:17:43.782343] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:51.308 [2024-05-15 18:17:43.782354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:51.308 [2024-05-15 18:17:43.782365] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:51.308 [2024-05-15 18:17:43.782376] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:51.308 [2024-05-15 18:17:43.782386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:51.308 [2024-05-15 18:17:43.782397] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:51.308 [2024-05-15 18:17:43.782424] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:51.308 [2024-05-15 18:17:43.782434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:51.308 [2024-05-15 18:17:43.782445] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:51.308 [2024-05-15 18:17:43.782458] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:51.308 [2024-05-15 18:17:43.782468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.308 [2024-05-15 18:17:43.782479] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:51.308 [2024-05-15 18:17:43.782490] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:51.308 [2024-05-15 18:17:43.782506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.308 [2024-05-15 18:17:43.782516] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:51.308 [2024-05-15 18:17:43.782526] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:51.308 [2024-05-15 18:17:43.782537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:51.308 [2024-05-15 18:17:43.782547] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:51.308 [2024-05-15 18:17:43.782557] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:51.308 [2024-05-15 18:17:43.782567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:51.308 [2024-05-15 18:17:43.782577] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:51.309 [2024-05-15 18:17:43.782588] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:51.309 [2024-05-15 18:17:43.782598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:51.309 [2024-05-15 18:17:43.782608] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:51.309 [2024-05-15 18:17:43.782618] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:51.309 [2024-05-15 18:17:43.782628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:51.309 [2024-05-15 18:17:43.782638] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:51.309 [2024-05-15 18:17:43.782648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:51.309 [2024-05-15 18:17:43.782658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:51.309 [2024-05-15 18:17:43.782668] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:51.309 [2024-05-15 18:17:43.782678] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:51.309 
[2024-05-15 18:17:43.782688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:51.309 [2024-05-15 18:17:43.782698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:51.309 [2024-05-15 18:17:43.782708] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:51.309 [2024-05-15 18:17:43.782717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:51.309 [2024-05-15 18:17:43.782727] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:51.309 [2024-05-15 18:17:43.782738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:51.309 [2024-05-15 18:17:43.782750] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:51.309 [2024-05-15 18:17:43.782761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.309 [2024-05-15 18:17:43.782772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:51.309 [2024-05-15 18:17:43.782782] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:51.309 [2024-05-15 18:17:43.782794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:51.309 [2024-05-15 18:17:43.782805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:51.309 [2024-05-15 18:17:43.782815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:51.309 [2024-05-15 18:17:43.782825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:51.309 [2024-05-15 18:17:43.782837] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:51.309 [2024-05-15 18:17:43.782851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:51.309 [2024-05-15 18:17:43.782863] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:51.309 [2024-05-15 18:17:43.782874] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:51.309 [2024-05-15 18:17:43.782885] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:51.309 [2024-05-15 18:17:43.782897] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:51.309 [2024-05-15 18:17:43.782908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:51.309 [2024-05-15 18:17:43.782919] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:51.309 [2024-05-15 18:17:43.782930] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:51.309 [2024-05-15 18:17:43.782942] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:51.309 [2024-05-15 18:17:43.782953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:51.309 [2024-05-15 18:17:43.782964] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:51.309 [2024-05-15 18:17:43.782975] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:51.309 [2024-05-15 18:17:43.782986] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:51.309 [2024-05-15 18:17:43.782998] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:51.309 [2024-05-15 18:17:43.783008] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:51.309 [2024-05-15 18:17:43.783021] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:51.309 [2024-05-15 18:17:43.783039] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:51.309 [2024-05-15 18:17:43.783051] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:51.309 [2024-05-15 18:17:43.783063] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:51.309 [2024-05-15 18:17:43.783074] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:51.309 [2024-05-15 18:17:43.783087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.309 [2024-05-15 18:17:43.783098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:51.309 [2024-05-15 18:17:43.783109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:25:51.309 [2024-05-15 18:17:43.783120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.309 [2024-05-15 18:17:43.806747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.309 [2024-05-15 18:17:43.806808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:51.309 [2024-05-15 18:17:43.806843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.571 ms 00:25:51.309 [2024-05-15 18:17:43.806855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.309 [2024-05-15 18:17:43.806975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.309 [2024-05-15 18:17:43.806992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:51.309 [2024-05-15 18:17:43.807004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:51.309 [2024-05-15 18:17:43.807015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:43.859433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:43.859486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:51.568 [2024-05-15 18:17:43.859522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.344 ms 00:25:51.568 [2024-05-15 18:17:43.859534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:43.859600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:51.568 [2024-05-15 18:17:43.859617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:51.568 [2024-05-15 18:17:43.859630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:51.568 [2024-05-15 18:17:43.859641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:43.860293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:43.860328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:51.568 [2024-05-15 18:17:43.860344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:25:51.568 [2024-05-15 18:17:43.860371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:43.860540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:43.860559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:51.568 [2024-05-15 18:17:43.860571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:25:51.568 [2024-05-15 18:17:43.860582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:43.881014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:43.881060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:51.568 [2024-05-15 18:17:43.881095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.387 ms 00:25:51.568 [2024-05-15 18:17:43.881107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:43.898799] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:51.568 [2024-05-15 18:17:43.898844] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:51.568 [2024-05-15 18:17:43.898878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:43.898889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:51.568 [2024-05-15 18:17:43.898902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.618 ms 00:25:51.568 [2024-05-15 18:17:43.898917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:43.929890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:43.929934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:51.568 [2024-05-15 18:17:43.929975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.915 ms 00:25:51.568 [2024-05-15 18:17:43.929987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:43.945673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:43.945731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:51.568 [2024-05-15 18:17:43.945779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.628 ms 00:25:51.568 [2024-05-15 18:17:43.945806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:43.961928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:43.961971] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:51.568 [2024-05-15 18:17:43.961988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.076 ms 00:25:51.568 [2024-05-15 18:17:43.961999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:43.962493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:43.962516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:51.568 [2024-05-15 18:17:43.962530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:25:51.568 [2024-05-15 18:17:43.962541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:44.044422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:44.044526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:51.568 [2024-05-15 18:17:44.044565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.837 ms 00:25:51.568 [2024-05-15 18:17:44.044578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:44.056965] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:51.568 [2024-05-15 18:17:44.060390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:44.060428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:51.568 [2024-05-15 18:17:44.060495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.733 ms 00:25:51.568 [2024-05-15 18:17:44.060507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:44.060605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:44.060625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:51.568 [2024-05-15 18:17:44.060671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:51.568 [2024-05-15 18:17:44.060682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:44.060795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:44.060825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:51.568 [2024-05-15 18:17:44.060839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:51.568 [2024-05-15 18:17:44.060850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:44.063148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:44.063204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:51.568 [2024-05-15 18:17:44.063235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.265 ms 00:25:51.568 [2024-05-15 18:17:44.063247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.568 [2024-05-15 18:17:44.063282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.568 [2024-05-15 18:17:44.063300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:51.568 [2024-05-15 18:17:44.063376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:51.568 [2024-05-15 18:17:44.063389] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:25:51.568 [2024-05-15 18:17:44.063448] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:25:51.568 [2024-05-15 18:17:44.063468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:51.568 [2024-05-15 18:17:44.063480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:25:51.568 [2024-05-15 18:17:44.063492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms
00:25:51.568 [2024-05-15 18:17:44.063503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:51.828 [2024-05-15 18:17:44.096442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:51.828 [2024-05-15 18:17:44.096504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:25:51.828 [2024-05-15 18:17:44.096523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.908 ms
00:25:51.828 [2024-05-15 18:17:44.096534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:51.828 [2024-05-15 18:17:44.096631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:51.828 [2024-05-15 18:17:44.096650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:51.828 [2024-05-15 18:17:44.096663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:25:51.828 [2024-05-15 18:17:44.096674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:51.828 [2024-05-15 18:17:44.098044] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.390 ms, result 0
00:26:33.191  Copying: 25/1024 [MB] (25 MBps) (intermediate progress frames, 51 MB through 1016 MB at 22-29 MBps and a final 1048200/1048576 [kB] (6924 kBps) frame, elided) Copying: 1024/1024 [MB] (average 24 MBps)
[2024-05-15 18:18:25.584691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.191 [2024-05-15 18:18:25.584765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:33.191 [2024-05-15 18:18:25.584801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:33.191 [2024-05-15 18:18:25.584813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
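
The copy above moved the 1 GiB file onto ftl0 at an average of 24 MBps, versus 147 MBps for the earlier /dev/urandom-to-file copy. Further up, the startup dumped the FTL layout twice: once as named regions with MiB sizes, once as superblock entries with hex blk_offs/blk_sz pairs. The two views agree if one FTL block is 4 KiB, which the dump's own numbers imply; a small sanity check of that arithmetic, pairing hex entries with named regions by size (the pairing itself is inferred, not stated in the log):

  # Cross-check a few regions from the layout dump above, assuming the
  # 4 KiB FTL block size the numbers imply; type-to-name pairing inferred.
  blk=4096                                   # bytes per FTL block
  for entry in l2p:0x5000 band_md:0x80 data_nvc:0x100000 data_btm:0x1900000; do
      name=${entry%%:*} sz=${entry##*:}
      awk -v n="$name" -v s="$((sz))" -v b="$blk" \
          'BEGIN { printf "%s: %d blocks = %.2f MiB\n", n, s, s * b / 1048576 }'
  done
  # -> 80.00, 0.50, 4096.00 and 102400.00 MiB, matching the MiB dump; likewise
  # "L2P entries: 20971520" x 4-byte "L2P address size" = 83886080 B = 80 MiB.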
00:26:33.191 [2024-05-15 18:18:25.586170] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:33.191 [2024-05-15 18:18:25.592162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.191 [2024-05-15 18:18:25.592370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:33.191 [2024-05-15 18:18:25.592500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.757 ms 00:26:33.191 [2024-05-15 18:18:25.592579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.191 [2024-05-15 18:18:25.605446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.191 [2024-05-15 18:18:25.605664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:33.191 [2024-05-15 18:18:25.605690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.721 ms 00:26:33.191 [2024-05-15 18:18:25.605704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.191 [2024-05-15 18:18:25.628333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.191 [2024-05-15 18:18:25.628379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:33.191 [2024-05-15 18:18:25.628412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.602 ms 00:26:33.191 [2024-05-15 18:18:25.628423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.191 [2024-05-15 18:18:25.634795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.191 [2024-05-15 18:18:25.634827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:33.191 [2024-05-15 18:18:25.634866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.319 ms 00:26:33.191 [2024-05-15 18:18:25.634877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.191 [2024-05-15 18:18:25.664120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.191 [2024-05-15 18:18:25.664162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:33.191 [2024-05-15 18:18:25.664180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.189 ms 00:26:33.191 [2024-05-15 18:18:25.664191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.191 [2024-05-15 18:18:25.680086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.191 [2024-05-15 18:18:25.680127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:33.191 [2024-05-15 18:18:25.680160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.852 ms 00:26:33.191 [2024-05-15 18:18:25.680171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.452 [2024-05-15 18:18:25.796962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.452 [2024-05-15 18:18:25.797068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:33.452 [2024-05-15 18:18:25.797091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.739 ms 00:26:33.452 [2024-05-15 18:18:25.797120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.452 [2024-05-15 18:18:25.826311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.452 [2024-05-15 18:18:25.826348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:33.452 
[2024-05-15 18:18:25.826378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.167 ms
00:26:33.452 [2024-05-15 18:18:25.826388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.452 [2024-05-15 18:18:25.855289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.452 [2024-05-15 18:18:25.855410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:26:33.452 [2024-05-15 18:18:25.855432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.860 ms
00:26:33.452 [2024-05-15 18:18:25.855443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.452 [2024-05-15 18:18:25.885027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.452 [2024-05-15 18:18:25.885067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:26:33.452 [2024-05-15 18:18:25.885099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.509 ms
00:26:33.452 [2024-05-15 18:18:25.885124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.452 [2024-05-15 18:18:25.912723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:33.452 [2024-05-15 18:18:25.912760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:26:33.452 [2024-05-15 18:18:25.912791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.516 ms
00:26:33.452 [2024-05-15 18:18:25.912817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:33.452 [2024-05-15 18:18:25.912858] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:33.452 [2024-05-15 18:18:25.912887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129024 / 261120 wr_cnt: 1 state: open
(Bands 2 through 62 each logged identically: 0 / 261120 wr_cnt: 0 state: free)
00:26:33.453 [2024-05-15 18:18:25.913726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63:
0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.913997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:33.453 [2024-05-15 18:18:25.914234] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:33.453 [2024-05-15 18:18:25.914246] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: edaff169-3ef7-4aa7-ab5f-0876c2bbcd36 00:26:33.453 [2024-05-15 18:18:25.914258] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129024 00:26:33.453 [2024-05-15 18:18:25.914269] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129984 00:26:33.453 [2024-05-15 18:18:25.914287] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129024 00:26:33.453 [2024-05-15 18:18:25.914313] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:26:33.453 [2024-05-15 18:18:25.914324] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:33.453 [2024-05-15 18:18:25.914348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:33.453 [2024-05-15 18:18:25.914362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:33.453 [2024-05-15 18:18:25.914372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:33.454 [2024-05-15 18:18:25.914382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:33.454 [2024-05-15 18:18:25.914394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.454 [2024-05-15 18:18:25.914405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:33.454 [2024-05-15 18:18:25.914418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:26:33.454 [2024-05-15 18:18:25.914429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.454 [2024-05-15 18:18:25.930660] mngt/ftl_mngt.c: 
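Editor's note: the WAF figure in the stats dump is simply total media writes divided by user writes; a quick sanity check with the two counters reported above (plain shell arithmetic, nothing SPDK-specific):

  # Write amplification from the counters in the stats dump above:
  # 129984 media writes for 129024 user writes
  echo "scale=4; 129984 / 129024" | bc   # prints 1.0074, matching "WAF: 1.0074"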
[2024-05-15 18:18:25.930660] [FTL][ftl0] Action 'Deinitialize L2P': duration 16.188 ms, status 0
[2024-05-15 18:18:25.930963] [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.205 ms, status 0
[2024-05-15 18:18:25.976335] [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
[2024-05-15 18:18:25.976561] [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
[2024-05-15 18:18:25.976694] [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
[2024-05-15 18:18:25.976756] [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
[2024-05-15 18:18:26.083438] [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
[2024-05-15 18:18:26.119336] [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
[2024-05-15 18:18:26.119486] [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
[2024-05-15 18:18:26.119573] [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
[2024-05-15 18:18:26.119716] [FTL][ftl0] Rollback 'Initialize memory pools': duration 0.000 ms, status 0
[2024-05-15 18:18:26.119805] [FTL][ftl0] Rollback 'Initialize superblock': duration 0.000 ms, status 0
[2024-05-15 18:18:26.119882] [FTL][ftl0] Rollback 'Open cache bdev': duration 0.000 ms, status 0
[2024-05-15 18:18:26.120004] [FTL][ftl0] Rollback 'Open base bdev': duration 0.000 ms, status 0
[2024-05-15 18:18:26.120193] [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 538.685 ms, result 0
18:18:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
18:18:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
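Editor's note: these two traced commands are the heart of the dirty-shutdown check: hash a reference file, read the same extent back out of the ftl0 bdev with spdk_dd, then compare. A minimal sketch of that flow, assuming the hash of the re-read testfile is compared against the testfile2 reference (the comparison step does not appear in this log; only the spdk_dd invocation below is verbatim from the trace):

  # Sketch of the dirty-shutdown verification flow (illustrative, not the
  # verbatim test script). The spdk_dd flags are the ones traced above.
  md5_ref=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 | cut -d' ' -f1)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --count=262144 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  md5_out=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile | cut -d' ' -f1)
  # The test passes only if the data read back from ftl0 matches the reference.
  [ "$md5_ref" = "$md5_out" ] && echo OK || echo MISMATCH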
[2024-05-15 18:18:30.032466] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization...
[2024-05-15 18:18:30.032711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82740 ]
[2024-05-15 18:18:30.211884] app.c: Total cores available: 1
[2024-05-15 18:18:30.480908] reactor.c: Reactor started on core 0
[2024-05-15 18:18:30.838551] bdev.c: Currently unable to find bdev with name: nvc0n1
[2024-05-15 18:18:30.838645] bdev.c: Currently unable to find bdev with name: nvc0n1
[2024-05-15 18:18:30.996851] [FTL][ftl0] Action 'Check configuration': duration 0.007 ms, status 0
[2024-05-15 18:18:30.997038] [FTL][ftl0] Action 'Open base bdev': duration 0.043 ms, status 0
[2024-05-15 18:18:30.997124] [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-05-15 18:18:30.998039] [FTL][ftl0] Using bdev as NV Cache device
[2024-05-15 18:18:30.998079] [FTL][ftl0] Action 'Open cache bdev': duration 0.960 ms, status 0
[2024-05-15 18:18:31.000399] [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-05-15 18:18:31.017782] [FTL][ftl0] Action 'Load super block': duration 17.388 ms, status 0
[2024-05-15 18:18:31.017972] [FTL][ftl0] Action 'Validate super block': duration 0.023 ms, status 0
[2024-05-15 18:18:31.027358] [FTL][ftl0] Action 'Initialize memory pools': duration 9.225 ms, status 0
[2024-05-15 18:18:31.027615] [FTL][ftl0] Action 'Initialize bands': duration 0.099 ms, status 0
[2024-05-15 18:18:31.027743] [FTL][ftl0] Action 'Register IO device': duration 0.013 ms, status 0
[2024-05-15 18:18:31.027855] [FTL][ftl0] FTL IO channel created on app_thread
[2024-05-15 18:18:31.032993] [FTL][ftl0] Action 'Initialize core IO channel': duration 5.168 ms, status 0
[2024-05-15 18:18:31.033100] [FTL][ftl0] Action 'Decorate bands': duration 0.011 ms, status 0
[2024-05-15 18:18:31.033203] [FTL][ftl0] FTL layout setup mode 0
[2024-05-15 18:18:31.033235] [FTL][ftl0] nvc layout blob load 0x138 bytes
[2024-05-15 18:18:31.033289] [FTL][ftl0] base layout blob load 0x48 bytes
[2024-05-15 18:18:31.033330] [FTL][ftl0] layout blob load 0x140 bytes
[2024-05-15 18:18:31.033408] [FTL][ftl0] nvc layout blob store 0x138 bytes
[2024-05-15 18:18:31.033426] [FTL][ftl0] base layout blob store 0x48 bytes
[2024-05-15 18:18:31.033440] [FTL][ftl0] layout blob store 0x140 bytes
[2024-05-15 18:18:31.033461] [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-05-15 18:18:31.033475] [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-05-15 18:18:31.033487] [FTL][ftl0] L2P entries: 20971520
[2024-05-15 18:18:31.033499] [FTL][ftl0] L2P address size: 4
[2024-05-15 18:18:31.033509] [FTL][ftl0] P2L checkpoint pages: 1024
[2024-05-15 18:18:31.033521] [FTL][ftl0] NV cache chunk count 4
[2024-05-15 18:18:31.033532] [FTL][ftl0] Action 'Initialize layout': duration 0.332 ms, status 0
[2024-05-15 18:18:31.033652] [FTL][ftl0] Action 'Verify layout': duration 0.059 ms, status 0
[2024-05-15 18:18:31.033840] [FTL][ftl0] NV cache layout (region: offset / blocks, MiB):
  sb:              0.00 / 0.12
  l2p:             0.12 / 80.00
  band_md:         80.12 / 0.50
  band_md_mirror:  80.62 / 0.50
  nvc_md:          97.62 / 0.12
  nvc_md_mirror:   97.75 / 0.12
  data_nvc:        97.88 / 4096.00
  p2l0:            81.12 / 4.00
  p2l1:            85.12 / 4.00
  p2l2:            89.12 / 4.00
  p2l3:            93.12 / 4.00
  trim_md:         97.12 / 0.25
  trim_md_mirror:  97.38 / 0.25
[2024-05-15 18:18:31.034364] [FTL][ftl0] Base device layout (region: offset / blocks, MiB):
  sb_mirror:       0.00 / 0.12
  vmap:            102400.25 / 3.38
  data_btm:        0.25 / 102400.00
[2024-05-15 18:18:31.034506] [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
  Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
  Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
  Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
  Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
  Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
  Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
  Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
  Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
  Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
  Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
  Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
  Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
[2024-05-15 18:18:31.034693] [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
  Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
  Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
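Editor's note: the MiB columns above follow directly from the block counts, assuming the FTL's 4 KiB block size (every figure in this dump is consistent with that). For example, the 0x5000-block l2p region and the L2P table itself both come out at 80 MiB:

  # Assuming a 4096-byte FTL block (consistent with the dump above):
  # l2p region: blk_sz 0x5000 = 20480 blocks
  echo $(( 0x5000 * 4096 / 1048576 ))   # -> 80, matching "l2p ... 80.00 MiB"
  # L2P table: 20971520 entries x 4-byte addresses
  echo $(( 20971520 * 4 / 1048576 ))    # -> 80 MiB as well
  # sb region: blk_sz 0x20 = 32 blocks
  echo $(( 0x20 * 4096 ))               # -> 131072 B = 0.125 MiB ("0.12 MiB" above)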
[2024-05-15 18:18:31.034771] [FTL][ftl0] Action 'Layout upgrade': duration 0.975 ms, status 0
[2024-05-15 18:18:31.057463] [FTL][ftl0] Action 'Initialize metadata': duration 22.583 ms, status 0
[2024-05-15 18:18:31.057666] [FTL][ftl0] Action 'Initialize band addresses': duration 0.061 ms, status 0
[2024-05-15 18:18:31.116738] [FTL][ftl0] Action 'Initialize NV cache': duration 58.951 ms, status 0
[2024-05-15 18:18:31.116913] [FTL][ftl0] Action 'Initialize valid map': duration 0.004 ms, status 0
[2024-05-15 18:18:31.117588] [FTL][ftl0] Action 'Initialize trim map': duration 0.547 ms, status 0
[2024-05-15 18:18:31.117805] [FTL][ftl0] Action 'Initialize bands metadata': duration 0.123 ms, status 0
[2024-05-15 18:18:31.138825] [FTL][ftl0] Action 'Initialize reloc': duration 20.937 ms, status 0
[2024-05-15 18:18:31.156798] [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
[2024-05-15 18:18:31.156841] [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-05-15 18:18:31.156876] [FTL][ftl0] Action 'Restore NV cache metadata': duration 17.789 ms, status 0
[2024-05-15 18:18:31.188185] [FTL][ftl0] Action 'Restore valid map metadata': duration 31.220 ms, status 0
[2024-05-15 18:18:31.204480] [FTL][ftl0] Action 'Restore band info metadata': duration 16.153 ms, status 0
[2024-05-15 18:18:31.220130] [FTL][ftl0] Action 'Restore trim metadata': duration 15.502 ms, status 0
[2024-05-15 18:18:31.220810] [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.456 ms, status 0
[2024-05-15 18:18:31.302043] [FTL][ftl0] Action 'Restore P2L checkpoints': duration 81.140 ms, status 0
[2024-05-15 18:18:31.315002] [FTL][ftl0] l2p maximum resident size is: 9 (of 10) MiB
[2024-05-15 18:18:31.318627] [FTL][ftl0] Action 'Initialize L2P': duration 16.333 ms, status 0
[2024-05-15 18:18:31.318839] [FTL][ftl0] Action 'Restore L2P': duration 0.007 ms, status 0
[2024-05-15 18:18:31.320618] [FTL][ftl0] Action 'Finalize band initialization': duration 1.683 ms, status 0
[2024-05-15 18:18:31.323021] [FTL][ftl0] Action 'Free P2L region bufs': duration 2.301 ms, status 0
[2024-05-15 18:18:31.323154] [FTL][ftl0] Action 'Start core poller': duration 0.006 ms, status 0
[2024-05-15 18:18:31.323278] [FTL][ftl0] Self test skipped
[2024-05-15 18:18:31.323296] [FTL][ftl0] Action 'Self test on startup': duration 0.020 ms, status 0
[2024-05-15 18:18:31.355762] [FTL][ftl0] Action 'Set FTL dirty state': duration 32.376 ms, status 0
[2024-05-15 18:18:31.355956] [FTL][ftl0] Action 'Finalize initialization': duration 0.059 ms, status 0
[2024-05-15 18:18:31.364029] [FTL][ftl0] Management process finished, name 'FTL startup', duration = 365.406 ms, result 0
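Editor's note: the "full chunks = 4" reported during startup lines up with the layout dumped above: a 4096 MiB data_nvc region split across the 4 NV-cache chunks gives 1 GiB per chunk (a back-of-the-envelope check, assuming the chunk count covers the whole data region):

  # data_nvc (4096 MiB, from the layout dump) over the reported 4 chunks:
  echo $(( 4096 / 4 ))                 # -> 1024 MiB per chunk
  # ...which at 4 KiB per block is:
  echo $(( 1024 * 1048576 / 4096 ))    # -> 262144 blocks per chunk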
Copying: 896/1048576 [kB] (896 kBps) ... Copying: 1024/1024 [MB] (average 27 MBps)   (intermediate spdk_dd progress updates elided)
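Editor's note: the reported 27 MBps average is consistent with the wall clock. The copy starts right after FTL startup completes (18:18:31.364) and the shutdown trace resumes at 18:19:08.766, about 37.4 s for 1024 MB (rough check using the surrounding record timestamps):

  # ~1024 MB between 18:18:31.364 and 18:19:08.766 (~37.4 s elapsed):
  echo "scale=1; 1024 / (68.766 - 31.364)" | bc   # -> 27.3 MB/s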
[2024-05-15 18:19:08.766175] [FTL][ftl0] Action 'Deinit core IO channel': duration 0.004 ms, status 0
[2024-05-15 18:19:08.766359] [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-05-15 18:19:08.770023] [FTL][ftl0] Action 'Unregister IO device': duration 3.640 ms, status 0
[2024-05-15 18:19:08.770605] [FTL][ftl0] Action 'Stop core poller': duration 0.489 ms, status 0
[2024-05-15 18:19:08.785661] [FTL][ftl0] Action 'Persist L2P': duration 14.957 ms, status 0
[2024-05-15 18:19:08.792873] [FTL][ftl0] Action 'Finish L2P unmaps': duration 7.045 ms, status 0
[2024-05-15 18:19:08.826622] [FTL][ftl0] Action 'Persist NV cache metadata': duration 33.571 ms, status 0
[2024-05-15 18:19:08.847859] [FTL][ftl0] Action 'Persist valid map metadata': duration 21.099 ms, status 0
[2024-05-15 18:19:08.851499] [FTL][ftl0] Action 'Persist P2L metadata': duration 3.484 ms, status 0
[2024-05-15 18:19:08.886208] [FTL][ftl0] Action 'persist band info metadata': duration 34.604 ms, status 0
[2024-05-15 18:19:08.918953] [FTL][ftl0] Action 'persist trim metadata': duration 32.523 ms, status 0
[2024-05-15 18:19:08.949881] [FTL][ftl0] Action 'Persist superblock': duration 30.762 ms, status 0
[2024-05-15 18:19:08.981341] [FTL][ftl0] Action 'Set FTL clean state': duration 31.246 ms, status 0
[2024-05-15 18:19:08.981534] [FTL][ftl0] Bands validity:
  Band 1: 261120 / 261120 wr_cnt: 1 state: closed
  Band 2: 3328 / 261120 wr_cnt: 1 state: open
  Bands 3-99: 0 / 261120 wr_cnt: 0 state: free (97 identical lines collapsed)
[2024-05-15 18:19:08.982859] ftl_dev_dump_bands: [FTL][ftl0] (log truncated here)
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:16.603 [2024-05-15 18:19:08.982881] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:16.603 [2024-05-15 18:19:08.982894] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: edaff169-3ef7-4aa7-ab5f-0876c2bbcd36 00:27:16.603 [2024-05-15 18:19:08.982907] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:27:16.603 [2024-05-15 18:19:08.982920] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 137408 00:27:16.603 [2024-05-15 18:19:08.982931] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 135424 00:27:16.603 [2024-05-15 18:19:08.982944] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:27:16.603 [2024-05-15 18:19:08.982965] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:16.603 [2024-05-15 18:19:08.982978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:16.603 [2024-05-15 18:19:08.982990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:16.603 [2024-05-15 18:19:08.983000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:16.603 [2024-05-15 18:19:08.983025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:16.603 [2024-05-15 18:19:08.983037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.603 [2024-05-15 18:19:08.983050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:16.603 [2024-05-15 18:19:08.983063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms 00:27:16.603 [2024-05-15 18:19:08.983075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.603 [2024-05-15 18:19:09.000172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.603 [2024-05-15 18:19:09.000224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:16.603 [2024-05-15 18:19:09.000250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.045 ms 00:27:16.603 [2024-05-15 18:19:09.000262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.603 [2024-05-15 18:19:09.000612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.603 [2024-05-15 18:19:09.000639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:16.603 [2024-05-15 18:19:09.000654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:27:16.603 [2024-05-15 18:19:09.000667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.603 [2024-05-15 18:19:09.045885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.603 [2024-05-15 18:19:09.045948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:16.603 [2024-05-15 18:19:09.045982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.603 [2024-05-15 18:19:09.045994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.603 [2024-05-15 18:19:09.046062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.603 [2024-05-15 18:19:09.046078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:16.603 [2024-05-15 18:19:09.046104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.603 [2024-05-15 18:19:09.046115] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:16.603 [2024-05-15 18:19:09.046207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.603 [2024-05-15 18:19:09.046226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:16.603 [2024-05-15 18:19:09.046245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.604 [2024-05-15 18:19:09.046256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.604 [2024-05-15 18:19:09.046277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.604 [2024-05-15 18:19:09.046290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:16.604 [2024-05-15 18:19:09.046300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.604 [2024-05-15 18:19:09.046358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.863 [2024-05-15 18:19:09.153662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.863 [2024-05-15 18:19:09.153749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:16.863 [2024-05-15 18:19:09.153790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.863 [2024-05-15 18:19:09.153803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.863 [2024-05-15 18:19:09.196048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.863 [2024-05-15 18:19:09.196119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:16.863 [2024-05-15 18:19:09.196140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.863 [2024-05-15 18:19:09.196154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.863 [2024-05-15 18:19:09.196229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.863 [2024-05-15 18:19:09.196247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:16.863 [2024-05-15 18:19:09.196260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.863 [2024-05-15 18:19:09.196285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.863 [2024-05-15 18:19:09.196408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.863 [2024-05-15 18:19:09.196436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:16.863 [2024-05-15 18:19:09.196448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.863 [2024-05-15 18:19:09.196460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.863 [2024-05-15 18:19:09.196590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.863 [2024-05-15 18:19:09.196610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:16.863 [2024-05-15 18:19:09.196623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.863 [2024-05-15 18:19:09.196642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.863 [2024-05-15 18:19:09.196704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.863 [2024-05-15 18:19:09.196723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:16.863 [2024-05-15 18:19:09.196736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:27:16.863 [2024-05-15 18:19:09.196748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.863 [2024-05-15 18:19:09.196797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.863 [2024-05-15 18:19:09.196812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:16.863 [2024-05-15 18:19:09.196825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.863 [2024-05-15 18:19:09.196837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.863 [2024-05-15 18:19:09.196900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.863 [2024-05-15 18:19:09.196931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:16.863 [2024-05-15 18:19:09.196944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.863 [2024-05-15 18:19:09.196956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.863 [2024-05-15 18:19:09.197117] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 430.902 ms, result 0 00:27:18.239 00:27:18.239 00:27:18.239 18:19:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:20.142 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:20.142 18:19:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:20.401 [2024-05-15 18:19:12.765776] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:27:20.401 [2024-05-15 18:19:12.765955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83156 ] 00:27:20.659 [2024-05-15 18:19:12.949538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.917 [2024-05-15 18:19:13.238060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.229 [2024-05-15 18:19:13.594359] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:21.229 [2024-05-15 18:19:13.594447] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:21.490 [2024-05-15 18:19:13.750791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.491 [2024-05-15 18:19:13.750860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:21.491 [2024-05-15 18:19:13.750883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:21.491 [2024-05-15 18:19:13.750900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.491 [2024-05-15 18:19:13.750975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.491 [2024-05-15 18:19:13.750996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:21.491 [2024-05-15 18:19:13.751010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:27:21.491 [2024-05-15 18:19:13.751021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.491 [2024-05-15 18:19:13.751054] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:21.491 [2024-05-15 18:19:13.751971] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:21.491 [2024-05-15 18:19:13.752007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.491 [2024-05-15 18:19:13.752021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:21.491 [2024-05-15 18:19:13.752044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:27:21.491 [2024-05-15 18:19:13.752055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.491 [2024-05-15 18:19:13.754072] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:21.491 [2024-05-15 18:19:13.770412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.491 [2024-05-15 18:19:13.770458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:21.491 [2024-05-15 18:19:13.770477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.341 ms 00:27:21.491 [2024-05-15 18:19:13.770490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.491 [2024-05-15 18:19:13.770560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.491 [2024-05-15 18:19:13.770580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:21.491 [2024-05-15 18:19:13.770593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:21.491 [2024-05-15 18:19:13.770605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.491 [2024-05-15 18:19:13.779364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.491 [2024-05-15 
18:19:13.779430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:21.491 [2024-05-15 18:19:13.779448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.666 ms 00:27:21.491 [2024-05-15 18:19:13.779459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.491 [2024-05-15 18:19:13.779573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.491 [2024-05-15 18:19:13.779592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:21.491 [2024-05-15 18:19:13.779634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:21.491 [2024-05-15 18:19:13.779644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.491 [2024-05-15 18:19:13.779703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.491 [2024-05-15 18:19:13.779719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:21.491 [2024-05-15 18:19:13.779731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:21.491 [2024-05-15 18:19:13.779741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.491 [2024-05-15 18:19:13.779774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:21.491 [2024-05-15 18:19:13.784872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.491 [2024-05-15 18:19:13.784907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:21.491 [2024-05-15 18:19:13.784938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.105 ms 00:27:21.491 [2024-05-15 18:19:13.784949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.491 [2024-05-15 18:19:13.784985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.491 [2024-05-15 18:19:13.784999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:21.491 [2024-05-15 18:19:13.785011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:21.491 [2024-05-15 18:19:13.785021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.491 [2024-05-15 18:19:13.785089] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:21.491 [2024-05-15 18:19:13.785131] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:27:21.491 [2024-05-15 18:19:13.785174] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:21.491 [2024-05-15 18:19:13.785195] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:27:21.491 [2024-05-15 18:19:13.785269] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:27:21.491 [2024-05-15 18:19:13.785283] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:21.491 [2024-05-15 18:19:13.785349] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:27:21.491 [2024-05-15 18:19:13.785373] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:21.491 [2024-05-15 18:19:13.785387] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:21.491 [2024-05-15 18:19:13.785400] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:21.491 [2024-05-15 18:19:13.785411] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:21.491 [2024-05-15 18:19:13.785422] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:27:21.491 [2024-05-15 18:19:13.785433] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:27:21.491 [2024-05-15 18:19:13.785446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.491 [2024-05-15 18:19:13.785458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:21.491 [2024-05-15 18:19:13.785477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:27:21.491 [2024-05-15 18:19:13.785495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.491 [2024-05-15 18:19:13.785577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.491 [2024-05-15 18:19:13.785596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:21.491 [2024-05-15 18:19:13.785609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:21.491 [2024-05-15 18:19:13.785620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.491 [2024-05-15 18:19:13.785707] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:21.491 [2024-05-15 18:19:13.785724] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:21.491 [2024-05-15 18:19:13.785741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:21.491 [2024-05-15 18:19:13.785753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.491 [2024-05-15 18:19:13.785765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:21.491 [2024-05-15 18:19:13.785775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:21.491 [2024-05-15 18:19:13.785785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:21.492 [2024-05-15 18:19:13.785796] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:21.492 [2024-05-15 18:19:13.785807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:21.492 [2024-05-15 18:19:13.785817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:21.492 [2024-05-15 18:19:13.785827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:21.492 [2024-05-15 18:19:13.785838] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:21.492 [2024-05-15 18:19:13.785848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:21.492 [2024-05-15 18:19:13.785859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:21.492 [2024-05-15 18:19:13.785870] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:27:21.492 [2024-05-15 18:19:13.785893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.492 [2024-05-15 18:19:13.785904] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:21.492 [2024-05-15 18:19:13.785915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:27:21.492 [2024-05-15 18:19:13.785925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:27:21.492 [2024-05-15 18:19:13.785935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:27:21.492 [2024-05-15 18:19:13.785946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:27:21.492 [2024-05-15 18:19:13.785957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:27:21.492 [2024-05-15 18:19:13.785968] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:21.492 [2024-05-15 18:19:13.785979] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:21.492 [2024-05-15 18:19:13.785989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:27:21.492 [2024-05-15 18:19:13.786009] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:21.492 [2024-05-15 18:19:13.786020] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:27:21.492 [2024-05-15 18:19:13.786030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:27:21.492 [2024-05-15 18:19:13.786041] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:21.492 [2024-05-15 18:19:13.786051] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:21.492 [2024-05-15 18:19:13.786061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:27:21.492 [2024-05-15 18:19:13.786071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:21.492 [2024-05-15 18:19:13.786081] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:27:21.492 [2024-05-15 18:19:13.786096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:27:21.492 [2024-05-15 18:19:13.786111] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:21.492 [2024-05-15 18:19:13.786122] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:21.492 [2024-05-15 18:19:13.786132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:21.492 [2024-05-15 18:19:13.786142] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:21.492 [2024-05-15 18:19:13.786153] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:27:21.492 [2024-05-15 18:19:13.786163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:21.492 [2024-05-15 18:19:13.786173] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:21.492 [2024-05-15 18:19:13.786185] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:21.492 [2024-05-15 18:19:13.786202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:21.492 [2024-05-15 18:19:13.786213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.492 [2024-05-15 18:19:13.786226] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:21.492 [2024-05-15 18:19:13.786239] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:21.492 [2024-05-15 18:19:13.786250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:21.492 [2024-05-15 18:19:13.786261] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:21.492 [2024-05-15 18:19:13.786271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:21.492 [2024-05-15 18:19:13.786281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:21.492 [2024-05-15 18:19:13.786307] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:21.492 [2024-05-15 18:19:13.786325] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:21.492 [2024-05-15 18:19:13.786338] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:21.492 [2024-05-15 18:19:13.786350] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:27:21.492 [2024-05-15 18:19:13.786361] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:27:21.492 [2024-05-15 18:19:13.786373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:27:21.492 [2024-05-15 18:19:13.786385] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:27:21.492 [2024-05-15 18:19:13.786396] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:27:21.492 [2024-05-15 18:19:13.786407] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:27:21.492 [2024-05-15 18:19:13.786418] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:27:21.492 [2024-05-15 18:19:13.786430] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:27:21.492 [2024-05-15 18:19:13.786441] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:27:21.492 [2024-05-15 18:19:13.786451] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:27:21.492 [2024-05-15 18:19:13.786463] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:27:21.492 [2024-05-15 18:19:13.786475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:27:21.492 [2024-05-15 18:19:13.786486] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:21.492 [2024-05-15 18:19:13.786499] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:21.492 [2024-05-15 18:19:13.786511] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:21.492 [2024-05-15 18:19:13.786523] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:21.492 [2024-05-15 18:19:13.786535] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:21.492 [2024-05-15 18:19:13.786546] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:27:21.492 [2024-05-15 18:19:13.786559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.493 [2024-05-15 18:19:13.786581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:21.493 [2024-05-15 18:19:13.786593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:27:21.493 [2024-05-15 18:19:13.786605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.493 [2024-05-15 18:19:13.808808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.493 [2024-05-15 18:19:13.808860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:21.493 [2024-05-15 18:19:13.808878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.139 ms 00:27:21.493 [2024-05-15 18:19:13.808890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.493 [2024-05-15 18:19:13.809004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.493 [2024-05-15 18:19:13.809019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:21.493 [2024-05-15 18:19:13.809031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:21.493 [2024-05-15 18:19:13.809041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.493 [2024-05-15 18:19:13.862660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.493 [2024-05-15 18:19:13.862740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:21.493 [2024-05-15 18:19:13.862767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.546 ms 00:27:21.493 [2024-05-15 18:19:13.862779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.493 [2024-05-15 18:19:13.862852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.493 [2024-05-15 18:19:13.862868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:21.493 [2024-05-15 18:19:13.862881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:21.493 [2024-05-15 18:19:13.862893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.493 [2024-05-15 18:19:13.863550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.493 [2024-05-15 18:19:13.863570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:21.493 [2024-05-15 18:19:13.863584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:27:21.493 [2024-05-15 18:19:13.863602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.493 [2024-05-15 18:19:13.863814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.493 [2024-05-15 18:19:13.863833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:21.493 [2024-05-15 18:19:13.863846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:27:21.493 [2024-05-15 18:19:13.863858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.493 [2024-05-15 18:19:13.883648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.493 [2024-05-15 18:19:13.883701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:21.493 [2024-05-15 18:19:13.883720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.760 ms 00:27:21.493 [2024-05-15 
18:19:13.883733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.493 [2024-05-15 18:19:13.900908] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:21.493 [2024-05-15 18:19:13.900976] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:21.493 [2024-05-15 18:19:13.900996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.493 [2024-05-15 18:19:13.901010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:21.493 [2024-05-15 18:19:13.901024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.112 ms 00:27:21.493 [2024-05-15 18:19:13.901036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.493 [2024-05-15 18:19:13.930394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.493 [2024-05-15 18:19:13.930442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:21.493 [2024-05-15 18:19:13.930459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.275 ms 00:27:21.493 [2024-05-15 18:19:13.930471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.493 [2024-05-15 18:19:13.946741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.493 [2024-05-15 18:19:13.946779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:21.493 [2024-05-15 18:19:13.946795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.217 ms 00:27:21.493 [2024-05-15 18:19:13.946821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.493 [2024-05-15 18:19:13.961402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.493 [2024-05-15 18:19:13.961461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:21.493 [2024-05-15 18:19:13.961478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.539 ms 00:27:21.493 [2024-05-15 18:19:13.961490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.493 [2024-05-15 18:19:13.961966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.493 [2024-05-15 18:19:13.961995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:21.493 [2024-05-15 18:19:13.962011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:27:21.493 [2024-05-15 18:19:13.962023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.752 [2024-05-15 18:19:14.053120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.752 [2024-05-15 18:19:14.053186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:21.752 [2024-05-15 18:19:14.053207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.071 ms 00:27:21.752 [2024-05-15 18:19:14.053219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.752 [2024-05-15 18:19:14.065994] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:21.752 [2024-05-15 18:19:14.070100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.753 [2024-05-15 18:19:14.070157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:21.753 [2024-05-15 18:19:14.070191] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.809 ms 00:27:21.753 [2024-05-15 18:19:14.070208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.753 [2024-05-15 18:19:14.070314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.753 [2024-05-15 18:19:14.070355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:21.753 [2024-05-15 18:19:14.070382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:21.753 [2024-05-15 18:19:14.070394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.753 [2024-05-15 18:19:14.071459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.753 [2024-05-15 18:19:14.071496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:21.753 [2024-05-15 18:19:14.071511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:27:21.753 [2024-05-15 18:19:14.071523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.753 [2024-05-15 18:19:14.073650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.753 [2024-05-15 18:19:14.073689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:27:21.753 [2024-05-15 18:19:14.073735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:27:21.753 [2024-05-15 18:19:14.073747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.753 [2024-05-15 18:19:14.073798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.753 [2024-05-15 18:19:14.073813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:21.753 [2024-05-15 18:19:14.073825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:21.753 [2024-05-15 18:19:14.073837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.753 [2024-05-15 18:19:14.073880] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:21.753 [2024-05-15 18:19:14.073896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.753 [2024-05-15 18:19:14.073912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:21.753 [2024-05-15 18:19:14.073924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:21.753 [2024-05-15 18:19:14.073935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.753 [2024-05-15 18:19:14.106033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.753 [2024-05-15 18:19:14.106081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:21.753 [2024-05-15 18:19:14.106117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.072 ms 00:27:21.753 [2024-05-15 18:19:14.106129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.753 [2024-05-15 18:19:14.106218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.753 [2024-05-15 18:19:14.106267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:21.753 [2024-05-15 18:19:14.106280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:21.753 [2024-05-15 18:19:14.106290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.753 [2024-05-15 18:19:14.107631] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 356.310 ms, result 0 00:28:04.087  Copying: 26/1024 [MB] (26 MBps) Copying: 53/1024 [MB] (26 MBps) Copying: 78/1024 [MB] (24 MBps) Copying: 101/1024 [MB] (23 MBps) Copying: 125/1024 [MB] (24 MBps) Copying: 145/1024 [MB] (19 MBps) Copying: 170/1024 [MB] (24 MBps) Copying: 194/1024 [MB] (24 MBps) Copying: 215/1024 [MB] (21 MBps) Copying: 238/1024 [MB] (23 MBps) Copying: 263/1024 [MB] (25 MBps) Copying: 287/1024 [MB] (23 MBps) Copying: 310/1024 [MB] (23 MBps) Copying: 333/1024 [MB] (22 MBps) Copying: 357/1024 [MB] (23 MBps) Copying: 382/1024 [MB] (24 MBps) Copying: 407/1024 [MB] (25 MBps) Copying: 431/1024 [MB] (24 MBps) Copying: 456/1024 [MB] (24 MBps) Copying: 481/1024 [MB] (25 MBps) Copying: 505/1024 [MB] (24 MBps) Copying: 530/1024 [MB] (25 MBps) Copying: 554/1024 [MB] (24 MBps) Copying: 577/1024 [MB] (22 MBps) Copying: 601/1024 [MB] (23 MBps) Copying: 625/1024 [MB] (23 MBps) Copying: 649/1024 [MB] (24 MBps) Copying: 673/1024 [MB] (23 MBps) Copying: 697/1024 [MB] (24 MBps) Copying: 721/1024 [MB] (23 MBps) Copying: 744/1024 [MB] (23 MBps) Copying: 768/1024 [MB] (23 MBps) Copying: 791/1024 [MB] (23 MBps) Copying: 817/1024 [MB] (25 MBps) Copying: 844/1024 [MB] (26 MBps) Copying: 869/1024 [MB] (25 MBps) Copying: 896/1024 [MB] (26 MBps) Copying: 921/1024 [MB] (24 MBps) Copying: 946/1024 [MB] (24 MBps) Copying: 971/1024 [MB] (25 MBps) Copying: 997/1024 [MB] (25 MBps) Copying: 1022/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-05-15 18:19:56.390239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.087 [2024-05-15 18:19:56.390329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:04.087 [2024-05-15 18:19:56.390353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:04.087 [2024-05-15 18:19:56.390366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-05-15 18:19:56.390398] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:04.087 [2024-05-15 18:19:56.394288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.087 [2024-05-15 18:19:56.394324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:04.087 [2024-05-15 18:19:56.394339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.867 ms 00:28:04.087 [2024-05-15 18:19:56.394358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-05-15 18:19:56.394613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.087 [2024-05-15 18:19:56.394632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:04.087 [2024-05-15 18:19:56.394645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:28:04.087 [2024-05-15 18:19:56.394657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-05-15 18:19:56.398061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.087 [2024-05-15 18:19:56.398084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:04.087 [2024-05-15 18:19:56.398098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.384 ms 00:28:04.087 [2024-05-15 18:19:56.398109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-05-15 18:19:56.404655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:04.087 [2024-05-15 18:19:56.404685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:28:04.087 [2024-05-15 18:19:56.404698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.524 ms 00:28:04.087 [2024-05-15 18:19:56.404710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-05-15 18:19:56.438073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.087 [2024-05-15 18:19:56.438173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:04.087 [2024-05-15 18:19:56.438192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.269 ms 00:28:04.087 [2024-05-15 18:19:56.438218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-05-15 18:19:56.456657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.087 [2024-05-15 18:19:56.456706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:04.087 [2024-05-15 18:19:56.456725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.392 ms 00:28:04.087 [2024-05-15 18:19:56.456737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-05-15 18:19:56.460210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.087 [2024-05-15 18:19:56.460245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:04.087 [2024-05-15 18:19:56.460269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.422 ms 00:28:04.087 [2024-05-15 18:19:56.460281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-05-15 18:19:56.491841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.087 [2024-05-15 18:19:56.491881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:04.087 [2024-05-15 18:19:56.491898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.520 ms 00:28:04.087 [2024-05-15 18:19:56.491910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-05-15 18:19:56.523288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.087 [2024-05-15 18:19:56.523355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:04.087 [2024-05-15 18:19:56.523390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.325 ms 00:28:04.087 [2024-05-15 18:19:56.523402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-05-15 18:19:56.554574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.087 [2024-05-15 18:19:56.554613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:04.087 [2024-05-15 18:19:56.554630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.127 ms 00:28:04.087 [2024-05-15 18:19:56.554641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-05-15 18:19:56.585474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.087 [2024-05-15 18:19:56.585517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:04.087 [2024-05-15 18:19:56.585534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.735 ms 00:28:04.087 [2024-05-15 18:19:56.585545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-05-15 
18:19:56.585591] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:04.087 [2024-05-15 18:19:56.585614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:28:04.087 [2024-05-15 18:19:56.585630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open
00:28:04.087 [2024-05-15 18:19:56.585643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 through Band 98: 0 / 261120 wr_cnt: 0 state: free
00:28:04.088 [2024-05-15 18:19:56.586918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:04.089 [2024-05-15 18:19:56.586929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:04.089 [2024-05-15 18:19:56.586954] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:04.089 [2024-05-15 18:19:56.586972] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: edaff169-3ef7-4aa7-ab5f-0876c2bbcd36 00:28:04.089 [2024-05-15 18:19:56.586999] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:28:04.089 [2024-05-15 18:19:56.587012] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:04.089 [2024-05-15 18:19:56.587023] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:04.089 [2024-05-15 18:19:56.587035] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:04.089 [2024-05-15 18:19:56.587049] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:04.089 [2024-05-15 18:19:56.587069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:04.089 [2024-05-15 18:19:56.587089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:04.089 [2024-05-15 18:19:56.587122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:04.089 [2024-05-15 18:19:56.587133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:04.089 [2024-05-15 18:19:56.587145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.089 [2024-05-15 18:19:56.587178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:04.089 [2024-05-15 18:19:56.587191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.555 ms 00:28:04.089 [2024-05-15 18:19:56.587203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.604562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.348 [2024-05-15 18:19:56.604596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:04.348 [2024-05-15 18:19:56.604612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.314 ms 00:28:04.348 [2024-05-15 18:19:56.604624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.604880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.348 [2024-05-15 18:19:56.604897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:04.348 [2024-05-15 18:19:56.604909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:28:04.348 [2024-05-15 18:19:56.604929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.653187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.348 [2024-05-15 18:19:56.653259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:04.348 [2024-05-15 18:19:56.653293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.348 [2024-05-15 18:19:56.653305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.653419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.348 [2024-05-15 18:19:56.653438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:04.348 [2024-05-15 18:19:56.653451] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.348 [2024-05-15 18:19:56.653471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.653561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.348 [2024-05-15 18:19:56.653580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:04.348 [2024-05-15 18:19:56.653593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.348 [2024-05-15 18:19:56.653604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.653627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.348 [2024-05-15 18:19:56.653640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:04.348 [2024-05-15 18:19:56.653652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.348 [2024-05-15 18:19:56.653663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.760632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.348 [2024-05-15 18:19:56.760688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:04.348 [2024-05-15 18:19:56.760707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.348 [2024-05-15 18:19:56.760720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.802173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.348 [2024-05-15 18:19:56.802245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:04.348 [2024-05-15 18:19:56.802275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.348 [2024-05-15 18:19:56.802299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.802401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.348 [2024-05-15 18:19:56.802420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:04.348 [2024-05-15 18:19:56.802433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.348 [2024-05-15 18:19:56.802444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.802499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.348 [2024-05-15 18:19:56.802519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:04.348 [2024-05-15 18:19:56.802532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.348 [2024-05-15 18:19:56.802543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.802694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.348 [2024-05-15 18:19:56.802721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:04.348 [2024-05-15 18:19:56.802743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.348 [2024-05-15 18:19:56.802763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.802836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.348 [2024-05-15 18:19:56.802862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:28:04.348 [2024-05-15 18:19:56.802877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.348 [2024-05-15 18:19:56.802889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.802942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.348 [2024-05-15 18:19:56.802975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:04.348 [2024-05-15 18:19:56.802989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.348 [2024-05-15 18:19:56.803001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.803058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.348 [2024-05-15 18:19:56.803075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:04.348 [2024-05-15 18:19:56.803087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.348 [2024-05-15 18:19:56.803099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.348 [2024-05-15 18:19:56.803261] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.982 ms, result 0 00:28:05.726 00:28:05.726 00:28:05.726 18:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:08.262 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:08.262 18:20:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:08.262 18:20:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:08.262 18:20:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:08.262 18:20:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:08.262 18:20:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:08.262 18:20:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:08.262 18:20:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:08.262 18:20:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81262 00:28:08.262 18:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@946 -- # '[' -z 81262 ']' 00:28:08.262 18:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # kill -0 81262 00:28:08.262 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (81262) - No such process 00:28:08.262 Process with pid 81262 is not found 00:28:08.262 18:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@973 -- # echo 'Process with pid 81262 is not found' 00:28:08.262 18:20:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:08.521 Remove shared memory files 00:28:08.521 18:20:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:08.521 18:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:08.521 18:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:08.521 18:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:08.521 18:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 
-- # rm -f rm -f 00:28:08.521 18:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:08.521 18:20:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:08.521 ************************************ 00:28:08.521 END TEST ftl_dirty_shutdown 00:28:08.521 ************************************ 00:28:08.521 00:28:08.521 real 3m54.546s 00:28:08.521 user 4m28.755s 00:28:08.521 sys 0m38.586s 00:28:08.521 18:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:08.521 18:20:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:08.521 18:20:00 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:08.521 18:20:00 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:28:08.521 18:20:00 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:08.521 18:20:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:08.521 ************************************ 00:28:08.521 START TEST ftl_upgrade_shutdown 00:28:08.521 ************************************ 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:08.521 * Looking for test storage... 00:28:08.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
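
The dirty-shutdown test that just wrapped up above stands or falls on the single "testfile2: OK" line from md5sum: a checksum recorded before the unclean shutdown has to match again after FTL recovery. (The "WAF: inf" in the stats dump earlier is simply total writes divided by user writes, 960 / 0 for this device instance, hence inf.) A minimal sketch of that round trip, with illustrative payload size and staging steps rather than the script's literal code:

  # sketch only -- the payload size and the recovery steps are assumptions
  dd if=/dev/urandom of=testfile2 bs=4K count=1K    # generate a payload
  md5sum testfile2 > testfile2.md5                  # record its checksum up front
  # ... write the payload through the FTL bdev, kill the target without a clean
  # shutdown so the device comes back dirty, restart it and let FTL recover ...
  md5sum -c testfile2.md5                           # prints "testfile2: OK" only if
                                                    # every block survived recovery
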
00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:08.521 
18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:08.521 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:08.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83699 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83699 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 83699 ']' 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:08.522 18:20:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:08.781 [2024-05-15 18:20:01.115159] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
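
The exports just above pin down the test geometry (a 20480 MiB base device on 0000:00:11.0, a 5120 MiB cache on 0000:00:10.0, and a 2 MiB L2P DRAM limit) before tcp_target_setup forks spdk_tgt and waitforlisten blocks until the new process (pid 83699 here) answers on its UNIX-domain RPC socket. Reduced to its essentials -- the real helper in autotest_common.sh also watches the pid and caps the retries at 100 -- the start-and-wait pattern looks roughly like this:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$spdk_tgt" --cpumask='[0]' &
  spdk_tgt_pid=$!
  # poll the default RPC socket until the target services a request
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.1
  done

The rpc_get_methods probe is an assumption of this sketch; any cheap RPC that fails while the socket is absent and succeeds once the reactor is up would do.
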
00:28:08.781 [2024-05-15 18:20:01.115624] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83699 ] 00:28:09.040 [2024-05-15 18:20:01.293168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.298 [2024-05-15 18:20:01.581148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:10.234 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:10.492 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:10.492 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:10.492 18:20:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:10.492 18:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=basen1 00:28:10.492 18:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:28:10.492 18:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:28:10.492 18:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 
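
get_bdev_size, whose locals are initialized just above, derives a bdev's size in MiB from the block_size and num_blocks fields returned by bdev_get_bdevs. The jq filters in the sketch below are the ones the following trace records actually run; the double RPC call is a simplification (the helper queries once and reuses the JSON):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bs=$("$rpc" bdev_get_bdevs -b basen1 | jq '.[] .block_size')   # 4096 for this namespace
  nb=$("$rpc" bdev_get_bdevs -b basen1 | jq '.[] .num_blocks')   # 1310720
  echo $(( bs * nb / 1024 / 1024 ))                              # 4096 B * 1310720 = 5120 MiB

That 5120 MiB figure is what base_size is set to a few records below, and it is what the 20480 MiB base-device request gets checked against.
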
00:28:10.492 18:20:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:10.750 18:20:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:28:10.750 { 00:28:10.750 "name": "basen1", 00:28:10.750 "aliases": [ 00:28:10.750 "6717697b-edef-4fee-a1b6-d1ca3666e3ce" 00:28:10.750 ], 00:28:10.750 "product_name": "NVMe disk", 00:28:10.750 "block_size": 4096, 00:28:10.750 "num_blocks": 1310720, 00:28:10.750 "uuid": "6717697b-edef-4fee-a1b6-d1ca3666e3ce", 00:28:10.750 "assigned_rate_limits": { 00:28:10.750 "rw_ios_per_sec": 0, 00:28:10.750 "rw_mbytes_per_sec": 0, 00:28:10.750 "r_mbytes_per_sec": 0, 00:28:10.750 "w_mbytes_per_sec": 0 00:28:10.750 }, 00:28:10.750 "claimed": true, 00:28:10.750 "claim_type": "read_many_write_one", 00:28:10.750 "zoned": false, 00:28:10.750 "supported_io_types": { 00:28:10.750 "read": true, 00:28:10.750 "write": true, 00:28:10.750 "unmap": true, 00:28:10.750 "write_zeroes": true, 00:28:10.750 "flush": true, 00:28:10.750 "reset": true, 00:28:10.750 "compare": true, 00:28:10.750 "compare_and_write": false, 00:28:10.750 "abort": true, 00:28:10.750 "nvme_admin": true, 00:28:10.750 "nvme_io": true 00:28:10.750 }, 00:28:10.750 "driver_specific": { 00:28:10.750 "nvme": [ 00:28:10.750 { 00:28:10.750 "pci_address": "0000:00:11.0", 00:28:10.750 "trid": { 00:28:10.750 "trtype": "PCIe", 00:28:10.750 "traddr": "0000:00:11.0" 00:28:10.750 }, 00:28:10.750 "ctrlr_data": { 00:28:10.750 "cntlid": 0, 00:28:10.750 "vendor_id": "0x1b36", 00:28:10.750 "model_number": "QEMU NVMe Ctrl", 00:28:10.750 "serial_number": "12341", 00:28:10.750 "firmware_revision": "8.0.0", 00:28:10.750 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:10.750 "oacs": { 00:28:10.750 "security": 0, 00:28:10.750 "format": 1, 00:28:10.750 "firmware": 0, 00:28:10.750 "ns_manage": 1 00:28:10.750 }, 00:28:10.750 "multi_ctrlr": false, 00:28:10.750 "ana_reporting": false 00:28:10.750 }, 00:28:10.750 "vs": { 00:28:10.750 "nvme_version": "1.4" 00:28:10.750 }, 00:28:10.750 "ns_data": { 00:28:10.750 "id": 1, 00:28:10.750 "can_share": false 00:28:10.750 } 00:28:10.750 } 00:28:10.750 ], 00:28:10.750 "mp_policy": "active_passive" 00:28:10.750 } 00:28:10.750 } 00:28:10.750 ]' 00:28:10.750 18:20:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:28:10.750 18:20:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:28:10.750 18:20:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:28:10.750 18:20:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:28:10.750 18:20:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:28:10.750 18:20:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:28:10.750 18:20:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:10.750 18:20:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:10.750 18:20:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:10.750 18:20:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:10.750 18:20:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:11.008 18:20:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0254e4f0-d091-4fe6-85cc-537b69a4997e 00:28:11.008 18:20:03 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@29 -- # for lvs in $stores 00:28:11.009 18:20:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0254e4f0-d091-4fe6-85cc-537b69a4997e 00:28:11.266 18:20:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:11.524 18:20:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=195f8b3a-0a91-4fb5-9875-b385a85326cb 00:28:11.525 18:20:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 195f8b3a-0a91-4fb5-9875-b385a85326cb 00:28:11.783 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=c852ea05-c005-4dcb-9641-9c0c024a693b 00:28:11.783 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z c852ea05-c005-4dcb-9641-9c0c024a693b ]] 00:28:11.783 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 c852ea05-c005-4dcb-9641-9c0c024a693b 5120 00:28:11.783 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:11.783 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:11.783 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=c852ea05-c005-4dcb-9641-9c0c024a693b 00:28:11.783 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:11.783 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size c852ea05-c005-4dcb-9641-9c0c024a693b 00:28:11.783 18:20:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=c852ea05-c005-4dcb-9641-9c0c024a693b 00:28:11.783 18:20:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:28:11.783 18:20:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:28:11.783 18:20:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:28:11.783 18:20:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c852ea05-c005-4dcb-9641-9c0c024a693b 00:28:12.349 18:20:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:28:12.349 { 00:28:12.349 "name": "c852ea05-c005-4dcb-9641-9c0c024a693b", 00:28:12.349 "aliases": [ 00:28:12.349 "lvs/basen1p0" 00:28:12.349 ], 00:28:12.349 "product_name": "Logical Volume", 00:28:12.349 "block_size": 4096, 00:28:12.349 "num_blocks": 5242880, 00:28:12.349 "uuid": "c852ea05-c005-4dcb-9641-9c0c024a693b", 00:28:12.349 "assigned_rate_limits": { 00:28:12.349 "rw_ios_per_sec": 0, 00:28:12.349 "rw_mbytes_per_sec": 0, 00:28:12.349 "r_mbytes_per_sec": 0, 00:28:12.349 "w_mbytes_per_sec": 0 00:28:12.349 }, 00:28:12.349 "claimed": false, 00:28:12.349 "zoned": false, 00:28:12.349 "supported_io_types": { 00:28:12.349 "read": true, 00:28:12.349 "write": true, 00:28:12.349 "unmap": true, 00:28:12.349 "write_zeroes": true, 00:28:12.349 "flush": false, 00:28:12.349 "reset": true, 00:28:12.349 "compare": false, 00:28:12.349 "compare_and_write": false, 00:28:12.349 "abort": false, 00:28:12.349 "nvme_admin": false, 00:28:12.349 "nvme_io": false 00:28:12.349 }, 00:28:12.349 "driver_specific": { 00:28:12.349 "lvol": { 00:28:12.349 "lvol_store_uuid": "195f8b3a-0a91-4fb5-9875-b385a85326cb", 00:28:12.349 "base_bdev": "basen1", 00:28:12.349 "thin_provision": true, 00:28:12.349 "num_allocated_clusters": 0, 00:28:12.349 
"snapshot": false, 00:28:12.349 "clone": false, 00:28:12.349 "esnap_clone": false 00:28:12.349 } 00:28:12.349 } 00:28:12.349 } 00:28:12.349 ]' 00:28:12.349 18:20:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:28:12.349 18:20:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:28:12.349 18:20:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:28:12.349 18:20:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=5242880 00:28:12.349 18:20:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=20480 00:28:12.349 18:20:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 20480 00:28:12.349 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:12.349 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:12.349 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:12.606 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:12.606 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:12.606 18:20:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:12.863 18:20:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:12.863 18:20:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:12.863 18:20:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d c852ea05-c005-4dcb-9641-9c0c024a693b -c cachen1p0 --l2p_dram_limit 2 00:28:13.150 [2024-05-15 18:20:05.414127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.414195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:13.150 [2024-05-15 18:20:05.414236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:13.150 [2024-05-15 18:20:05.414249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.414366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.414385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:13.150 [2024-05-15 18:20:05.414404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:28:13.150 [2024-05-15 18:20:05.414416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.414488] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:13.150 [2024-05-15 18:20:05.415627] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:13.150 [2024-05-15 18:20:05.415667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.415681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:13.150 [2024-05-15 18:20:05.415703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.210 ms 00:28:13.150 [2024-05-15 18:20:05.415715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.415904] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: 
*NOTICE*: [FTL][ftl] Create new FTL, UUID 57dbfc00-3576-4c34-bd35-7230d366eca1 00:28:13.150 [2024-05-15 18:20:05.417978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.418034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:13.150 [2024-05-15 18:20:05.418053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:13.150 [2024-05-15 18:20:05.418066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.428627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.428698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:13.150 [2024-05-15 18:20:05.428718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.474 ms 00:28:13.150 [2024-05-15 18:20:05.428734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.428819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.428859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:13.150 [2024-05-15 18:20:05.428887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:28:13.150 [2024-05-15 18:20:05.428902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.429000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.429025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:13.150 [2024-05-15 18:20:05.429039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:13.150 [2024-05-15 18:20:05.429068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.429101] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:13.150 [2024-05-15 18:20:05.434788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.434855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:13.150 [2024-05-15 18:20:05.434890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.693 ms 00:28:13.150 [2024-05-15 18:20:05.434902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.434978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.434994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:13.150 [2024-05-15 18:20:05.435008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:13.150 [2024-05-15 18:20:05.435019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.435068] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:13.150 [2024-05-15 18:20:05.435194] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:28:13.150 [2024-05-15 18:20:05.435214] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:13.150 [2024-05-15 18:20:05.435229] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:28:13.150 [2024-05-15 18:20:05.435248] 
ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:13.150 [2024-05-15 18:20:05.435261] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:13.150 [2024-05-15 18:20:05.435275] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:13.150 [2024-05-15 18:20:05.435287] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:13.150 [2024-05-15 18:20:05.435315] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:28:13.150 [2024-05-15 18:20:05.435325] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:28:13.150 [2024-05-15 18:20:05.435360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.435375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:13.150 [2024-05-15 18:20:05.435393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:28:13.150 [2024-05-15 18:20:05.435404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.435509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.435525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:13.150 [2024-05-15 18:20:05.435541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:28:13.150 [2024-05-15 18:20:05.435553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.435642] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:13.150 [2024-05-15 18:20:05.435671] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:13.150 [2024-05-15 18:20:05.435691] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:13.150 [2024-05-15 18:20:05.435706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.150 [2024-05-15 18:20:05.435721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:13.150 [2024-05-15 18:20:05.435732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:13.150 [2024-05-15 18:20:05.435745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:13.150 [2024-05-15 18:20:05.435756] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:13.150 [2024-05-15 18:20:05.435770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:13.150 [2024-05-15 18:20:05.435781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.150 [2024-05-15 18:20:05.435794] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:13.150 [2024-05-15 18:20:05.435805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:13.150 [2024-05-15 18:20:05.435820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.150 [2024-05-15 18:20:05.435831] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:13.150 [2024-05-15 18:20:05.435845] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:28:13.150 [2024-05-15 18:20:05.435856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.150 [2024-05-15 18:20:05.435869] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:13.150 [2024-05-15 18:20:05.435880] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:28:13.150 [2024-05-15 18:20:05.435895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.150 [2024-05-15 18:20:05.435906] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:28:13.150 [2024-05-15 18:20:05.435920] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:28:13.150 [2024-05-15 18:20:05.435934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:28:13.150 [2024-05-15 18:20:05.435948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:13.150 [2024-05-15 18:20:05.435959] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:13.150 [2024-05-15 18:20:05.435986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:13.150 [2024-05-15 18:20:05.435999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:13.150 [2024-05-15 18:20:05.436012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:28:13.150 [2024-05-15 18:20:05.436023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:13.150 [2024-05-15 18:20:05.436037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:13.150 [2024-05-15 18:20:05.436047] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:13.150 [2024-05-15 18:20:05.436061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:13.150 [2024-05-15 18:20:05.436071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:13.150 [2024-05-15 18:20:05.436084] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:28:13.150 [2024-05-15 18:20:05.436095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:13.150 [2024-05-15 18:20:05.436120] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:13.150 [2024-05-15 18:20:05.436131] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:13.150 [2024-05-15 18:20:05.436144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.150 [2024-05-15 18:20:05.436156] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:13.150 [2024-05-15 18:20:05.436171] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:28:13.150 [2024-05-15 18:20:05.436182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.150 [2024-05-15 18:20:05.436195] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:13.150 [2024-05-15 18:20:05.436207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:13.150 [2024-05-15 18:20:05.436221] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:13.150 [2024-05-15 18:20:05.436234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.150 [2024-05-15 18:20:05.436248] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:13.150 [2024-05-15 18:20:05.436260] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:13.150 [2024-05-15 18:20:05.436273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:13.150 [2024-05-15 18:20:05.436284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:13.150 [2024-05-15 18:20:05.436311] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:13.150 [2024-05-15 18:20:05.436325] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:13.150 [2024-05-15 18:20:05.436343] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:13.150 [2024-05-15 18:20:05.436359] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.150 [2024-05-15 18:20:05.436375] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:13.150 [2024-05-15 18:20:05.436389] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:28:13.150 [2024-05-15 18:20:05.436405] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:28:13.150 [2024-05-15 18:20:05.436417] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:28:13.150 [2024-05-15 18:20:05.436432] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:28:13.150 [2024-05-15 18:20:05.436445] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:28:13.150 [2024-05-15 18:20:05.436460] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:28:13.150 [2024-05-15 18:20:05.436472] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:28:13.150 [2024-05-15 18:20:05.436487] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:28:13.150 [2024-05-15 18:20:05.436500] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:28:13.150 [2024-05-15 18:20:05.436514] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:28:13.150 [2024-05-15 18:20:05.436527] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:28:13.150 [2024-05-15 18:20:05.436543] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:28:13.150 [2024-05-15 18:20:05.436555] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:13.150 [2024-05-15 18:20:05.436575] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.150 [2024-05-15 18:20:05.436589] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:13.150 [2024-05-15 18:20:05.436603] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:13.150 [2024-05-15 18:20:05.436615] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:13.150 [2024-05-15 18:20:05.436630] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:13.150 [2024-05-15 18:20:05.436643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.436658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:13.150 [2024-05-15 18:20:05.436673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.048 ms 00:28:13.150 [2024-05-15 18:20:05.436688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.459330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.459424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:13.150 [2024-05-15 18:20:05.459443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.582 ms 00:28:13.150 [2024-05-15 18:20:05.459458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.459516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.459535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:13.150 [2024-05-15 18:20:05.459549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:13.150 [2024-05-15 18:20:05.459565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.505704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.505790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:13.150 [2024-05-15 18:20:05.505812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 46.057 ms 00:28:13.150 [2024-05-15 18:20:05.505828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.505891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.505911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:13.150 [2024-05-15 18:20:05.505925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:13.150 [2024-05-15 18:20:05.505940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.506653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.506684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:13.150 [2024-05-15 18:20:05.506701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.633 ms 00:28:13.150 [2024-05-15 18:20:05.506715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.506774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.506792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:13.150 [2024-05-15 18:20:05.506805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:13.150 [2024-05-15 18:20:05.506822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.530045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.530107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:13.150 [2024-05-15 18:20:05.530127] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl] duration: 23.181 ms 00:28:13.150 [2024-05-15 18:20:05.530159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.545318] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:13.150 [2024-05-15 18:20:05.546825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.546858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:13.150 [2024-05-15 18:20:05.546879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.520 ms 00:28:13.150 [2024-05-15 18:20:05.546893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.583854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.150 [2024-05-15 18:20:05.583921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:13.150 [2024-05-15 18:20:05.583963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 36.913 ms 00:28:13.150 [2024-05-15 18:20:05.584006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.150 [2024-05-15 18:20:05.584076] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:28:13.150 [2024-05-15 18:20:05.584097] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:28:18.419 [2024-05-15 18:20:10.486678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.419 [2024-05-15 18:20:10.486767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:18.420 [2024-05-15 18:20:10.486801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4902.587 ms 00:28:18.420 [2024-05-15 18:20:10.486815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.420 [2024-05-15 18:20:10.486944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.420 [2024-05-15 18:20:10.486966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:18.420 [2024-05-15 18:20:10.486983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.085 ms 00:28:18.420 [2024-05-15 18:20:10.486996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.420 [2024-05-15 18:20:10.518482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.420 [2024-05-15 18:20:10.518530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:18.420 [2024-05-15 18:20:10.518568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 31.413 ms 00:28:18.420 [2024-05-15 18:20:10.518581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.420 [2024-05-15 18:20:10.550027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.420 [2024-05-15 18:20:10.550071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:18.420 [2024-05-15 18:20:10.550093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 31.409 ms 00:28:18.420 [2024-05-15 18:20:10.550105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.420 [2024-05-15 18:20:10.550645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.420 [2024-05-15 18:20:10.550669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:18.420 
[2024-05-15 18:20:10.550685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:28:18.420 [2024-05-15 18:20:10.550698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.420 [2024-05-15 18:20:10.641875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.420 [2024-05-15 18:20:10.641942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:18.420 [2024-05-15 18:20:10.641972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 91.105 ms 00:28:18.420 [2024-05-15 18:20:10.641986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.420 [2024-05-15 18:20:10.675771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.420 [2024-05-15 18:20:10.675826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:18.420 [2024-05-15 18:20:10.675856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 33.730 ms 00:28:18.420 [2024-05-15 18:20:10.675870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.420 [2024-05-15 18:20:10.678225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.420 [2024-05-15 18:20:10.678266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:28:18.420 [2024-05-15 18:20:10.678301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.319 ms 00:28:18.420 [2024-05-15 18:20:10.678323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.420 [2024-05-15 18:20:10.710934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.420 [2024-05-15 18:20:10.710980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:18.420 [2024-05-15 18:20:10.711002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 32.539 ms 00:28:18.420 [2024-05-15 18:20:10.711015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.420 [2024-05-15 18:20:10.711056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.420 [2024-05-15 18:20:10.711073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:18.420 [2024-05-15 18:20:10.711089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:18.420 [2024-05-15 18:20:10.711102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.420 [2024-05-15 18:20:10.711286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.420 [2024-05-15 18:20:10.711361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:18.420 [2024-05-15 18:20:10.711379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:28:18.420 [2024-05-15 18:20:10.711391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.420 [2024-05-15 18:20:10.712902] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5298.058 ms, result 0 00:28:18.420 { 00:28:18.420 "name": "ftl", 00:28:18.420 "uuid": "57dbfc00-3576-4c34-bd35-7230d366eca1" 00:28:18.420 } 00:28:18.420 18:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:18.713 [2024-05-15 18:20:11.003773] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.713 18:20:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:18.991 18:20:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:19.250 [2024-05-15 18:20:11.608575] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:19.250 18:20:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:19.508 [2024-05-15 18:20:11.947143] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:19.508 [2024-05-15 18:20:11.947716] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:19.508 18:20:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:20.075 Fill FTL, iteration 1 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83845 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83845 /var/tmp/spdk.tgt.sock 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 83845 ']' 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:20.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:20.075 18:20:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:20.075 [2024-05-15 18:20:12.473121] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:28:20.075 [2024-05-15 18:20:12.473549] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83845 ] 00:28:20.334 [2024-05-15 18:20:12.640457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.595 [2024-05-15 18:20:12.918808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.530 18:20:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:21.530 18:20:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:28:21.530 18:20:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:21.788 ftln1 00:28:21.788 18:20:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:21.788 18:20:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:22.046 18:20:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:22.046 18:20:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83845 00:28:22.046 18:20:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 83845 ']' 00:28:22.046 18:20:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 83845 00:28:22.046 18:20:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:28:22.046 18:20:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:22.046 18:20:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83845 00:28:22.046 killing process with pid 83845 00:28:22.046 18:20:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:22.046 18:20:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:22.046 18:20:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83845' 00:28:22.046 18:20:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 83845 00:28:22.046 18:20:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 83845 00:28:24.575 18:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:24.575 18:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:24.575 
[2024-05-15 18:20:16.705157] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:28:24.575 [2024-05-15 18:20:16.705348] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83898 ] 00:28:24.575 [2024-05-15 18:20:16.875831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.833 [2024-05-15 18:20:17.116652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.392  Copying: 201/1024 [MB] (201 MBps) Copying: 404/1024 [MB] (203 MBps) Copying: 619/1024 [MB] (215 MBps) Copying: 831/1024 [MB] (212 MBps) Copying: 1024/1024 [MB] (average 208 MBps) 00:28:31.392 00:28:31.392 Calculate MD5 checksum, iteration 1 00:28:31.392 18:20:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:31.392 18:20:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:31.392 18:20:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:31.392 18:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:31.392 18:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:31.392 18:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:31.392 18:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:31.392 18:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:31.392 [2024-05-15 18:20:23.755514] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
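
What the tcp_dd helper above actually does: each transfer is driven by a separate spdk_dd process acting as an NVMe/TCP initiator. The initiator attaches to the nqn.2018-09.io.spdk:cnode0 subsystem exported by the target, which is what makes the FTL namespace appear as the bdev ftln1. A minimal sketch of that attach step, using the RPC exactly as traced in this log (socket path and NQN are the ones from this run):

  # attach the initiator to the target's NVMe/TCP subsystem; the controller
  # is named "ftl", so its namespace shows up as the bdev "ftln1"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
      bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2018-09.io.spdk:cnode0

The resulting bdev configuration is then captured with save_subsystem_config -n bdev and wrapped in a {"subsystems": [ ... ]} envelope (the echo '{"subsystems": [' / echo ']}' pair traced above), building the ini.json that the later spdk_dd invocations load via --json instead of re-issuing the RPCs.
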
00:28:31.392 [2024-05-15 18:20:23.755683] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83969 ] 00:28:31.651 [2024-05-15 18:20:23.924670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.909 [2024-05-15 18:20:24.208389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.409  Copying: 512/1024 [MB] (512 MBps) Copying: 966/1024 [MB] (454 MBps) Copying: 1024/1024 [MB] (average 474 MBps) 00:28:35.409 00:28:35.409 18:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:35.409 18:20:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:38.005 18:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:38.005 Fill FTL, iteration 2 00:28:38.005 18:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2570fce8b68650057c5b2e32a0d2b7e4 00:28:38.005 18:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:38.005 18:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:38.005 18:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:38.005 18:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:38.005 18:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:38.005 18:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:38.005 18:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:38.005 18:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:38.005 18:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:38.005 [2024-05-15 18:20:30.045334] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
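
Iteration 1 is now fully accounted for: its digest 2570fce8b68650057c5b2e32a0d2b7e4 is stored in sums[0], and the offsets advance by one 1 GiB window per pass (seek for writes, skip for read-backs, both counted in 1 MiB blocks). A condensed sketch of the loop upgrade_shutdown.sh is tracing here, assuming tcp_dd is the wrapper from ftl/common.sh seen above:

  testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
  bs=1048576 count=1024 qd=2 iterations=2
  seek=0 skip=0
  sums=()
  for (( i = 0; i < iterations; i++ )); do
      echo "Fill FTL, iteration $(( i + 1 ))"
      # write 1024 x 1 MiB of random data at the current write offset
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      (( seek += count ))
      echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
      # read the window just written back into a local file and hash it
      tcp_dd --ib=ftln1 --of="$testfile" --bs=$bs --count=$count --qd=$qd --skip=$skip
      (( skip += count ))
      sums[i]=$(md5sum "$testfile" | cut -f1 -d ' ')
  done

This matches the traced offsets: the second fill lands at --seek=1024 and its read-back at --skip=1024, exactly as the log shows next.
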
00:28:38.005 [2024-05-15 18:20:30.045513] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84036 ] 00:28:38.005 [2024-05-15 18:20:30.207268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.005 [2024-05-15 18:20:30.447547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.714  Copying: 210/1024 [MB] (210 MBps) Copying: 418/1024 [MB] (208 MBps) Copying: 632/1024 [MB] (214 MBps) Copying: 842/1024 [MB] (210 MBps) Copying: 1024/1024 [MB] (average 209 MBps) 00:28:44.714 00:28:44.714 18:20:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:44.714 Calculate MD5 checksum, iteration 2 00:28:44.714 18:20:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:44.714 18:20:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:44.714 18:20:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:44.714 18:20:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:44.714 18:20:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:44.714 18:20:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:44.715 18:20:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:44.715 [2024-05-15 18:20:37.060888] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
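
Once this second read-back finishes and its digest lands in sums[1], the test turns from data movement to FTL property manipulation, traced just below. That whole surface is driven by two RPCs; a minimal sketch of the pair used here, with the bdev name from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # flip a mutable property; verbose_mode unlocks the advanced read-only ones
  "$rpc" bdev_ftl_set_property -b ftl -p verbose_mode -v true
  # arm the upgrade path: on shutdown, FTL runs the actions needed to come
  # back up on a new layout version
  "$rpc" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  # dump the property JSON (superblock version, band states, cache chunks)
  "$rpc" bdev_ftl_get_properties -b ftl
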
00:28:44.715 [2024-05-15 18:20:37.061052] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84106 ] 00:28:44.977 [2024-05-15 18:20:37.228240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.244 [2024-05-15 18:20:37.497555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.278  Copying: 426/1024 [MB] (426 MBps) Copying: 900/1024 [MB] (474 MBps) Copying: 1024/1024 [MB] (average 442 MBps) 00:28:51.278 00:28:51.278 18:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:51.278 18:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:53.178 18:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:53.178 18:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c29bd0424937973884a41a45c683bd5f 00:28:53.178 18:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:53.178 18:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:53.178 18:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:53.178 [2024-05-15 18:20:45.551991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:53.178 [2024-05-15 18:20:45.552074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:53.178 [2024-05-15 18:20:45.552097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:53.178 [2024-05-15 18:20:45.552111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:53.178 [2024-05-15 18:20:45.552159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:53.178 [2024-05-15 18:20:45.552177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:53.178 [2024-05-15 18:20:45.552209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:53.178 [2024-05-15 18:20:45.552221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:53.178 [2024-05-15 18:20:45.552251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:53.178 [2024-05-15 18:20:45.552273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:53.178 [2024-05-15 18:20:45.552287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:53.178 [2024-05-15 18:20:45.552324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:53.178 [2024-05-15 18:20:45.552419] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.426 ms, result 0 00:28:53.178 true 00:28:53.178 18:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:53.436 { 00:28:53.436 "name": "ftl", 00:28:53.436 "properties": [ 00:28:53.436 { 00:28:53.436 "name": "superblock_version", 00:28:53.436 "value": 5, 00:28:53.436 "read-only": true 00:28:53.436 }, 00:28:53.436 { 00:28:53.436 "name": "base_device", 00:28:53.436 "bands": [ 00:28:53.436 { 00:28:53.436 "id": 0, 00:28:53.436 "state": "FREE", 00:28:53.436 "validity": 0.0 00:28:53.436 }, 00:28:53.436 { 00:28:53.436 
"id": 1, 00:28:53.436 "state": "FREE", 00:28:53.436 "validity": 0.0 00:28:53.436 }, 00:28:53.436 { 00:28:53.436 "id": 2, 00:28:53.436 "state": "FREE", 00:28:53.436 "validity": 0.0 00:28:53.436 }, 00:28:53.436 { 00:28:53.436 "id": 3, 00:28:53.436 "state": "FREE", 00:28:53.436 "validity": 0.0 00:28:53.436 }, 00:28:53.436 { 00:28:53.436 "id": 4, 00:28:53.436 "state": "FREE", 00:28:53.436 "validity": 0.0 00:28:53.436 }, 00:28:53.436 { 00:28:53.436 "id": 5, 00:28:53.436 "state": "FREE", 00:28:53.436 "validity": 0.0 00:28:53.436 }, 00:28:53.436 { 00:28:53.436 "id": 6, 00:28:53.436 "state": "FREE", 00:28:53.436 "validity": 0.0 00:28:53.436 }, 00:28:53.436 { 00:28:53.436 "id": 7, 00:28:53.436 "state": "FREE", 00:28:53.436 "validity": 0.0 00:28:53.436 }, 00:28:53.436 { 00:28:53.436 "id": 8, 00:28:53.436 "state": "FREE", 00:28:53.436 "validity": 0.0 00:28:53.436 }, 00:28:53.436 { 00:28:53.436 "id": 9, 00:28:53.436 "state": "FREE", 00:28:53.436 "validity": 0.0 00:28:53.436 }, 00:28:53.436 { 00:28:53.436 "id": 10, 00:28:53.436 "state": "FREE", 00:28:53.437 "validity": 0.0 00:28:53.437 }, 00:28:53.437 { 00:28:53.437 "id": 11, 00:28:53.437 "state": "FREE", 00:28:53.437 "validity": 0.0 00:28:53.437 }, 00:28:53.437 { 00:28:53.437 "id": 12, 00:28:53.437 "state": "FREE", 00:28:53.437 "validity": 0.0 00:28:53.437 }, 00:28:53.437 { 00:28:53.437 "id": 13, 00:28:53.437 "state": "FREE", 00:28:53.437 "validity": 0.0 00:28:53.437 }, 00:28:53.437 { 00:28:53.437 "id": 14, 00:28:53.437 "state": "FREE", 00:28:53.437 "validity": 0.0 00:28:53.437 }, 00:28:53.437 { 00:28:53.437 "id": 15, 00:28:53.437 "state": "FREE", 00:28:53.437 "validity": 0.0 00:28:53.437 }, 00:28:53.437 { 00:28:53.437 "id": 16, 00:28:53.437 "state": "FREE", 00:28:53.437 "validity": 0.0 00:28:53.437 }, 00:28:53.437 { 00:28:53.437 "id": 17, 00:28:53.437 "state": "FREE", 00:28:53.437 "validity": 0.0 00:28:53.437 } 00:28:53.437 ], 00:28:53.437 "read-only": true 00:28:53.437 }, 00:28:53.437 { 00:28:53.437 "name": "cache_device", 00:28:53.437 "type": "bdev", 00:28:53.437 "chunks": [ 00:28:53.437 { 00:28:53.437 "id": 0, 00:28:53.437 "state": "CLOSED", 00:28:53.437 "utilization": 1.0 00:28:53.437 }, 00:28:53.437 { 00:28:53.437 "id": 1, 00:28:53.437 "state": "CLOSED", 00:28:53.437 "utilization": 1.0 00:28:53.437 }, 00:28:53.437 { 00:28:53.437 "id": 2, 00:28:53.437 "state": "OPEN", 00:28:53.437 "utilization": 0.001953125 00:28:53.437 }, 00:28:53.437 { 00:28:53.437 "id": 3, 00:28:53.437 "state": "OPEN", 00:28:53.437 "utilization": 0.0 00:28:53.437 } 00:28:53.437 ], 00:28:53.437 "read-only": true 00:28:53.437 }, 00:28:53.437 { 00:28:53.437 "name": "verbose_mode", 00:28:53.437 "value": true, 00:28:53.437 "unit": "", 00:28:53.437 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:53.437 }, 00:28:53.437 { 00:28:53.437 "name": "prep_upgrade_on_shutdown", 00:28:53.437 "value": false, 00:28:53.437 "unit": "", 00:28:53.437 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:53.437 } 00:28:53.437 ] 00:28:53.437 } 00:28:53.437 18:20:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:53.695 [2024-05-15 18:20:46.164937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:53.695 [2024-05-15 18:20:46.165037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:53.695 [2024-05-15 18:20:46.165071] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:53.695 [2024-05-15 18:20:46.165092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:53.695 [2024-05-15 18:20:46.165150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:53.695 [2024-05-15 18:20:46.165174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:53.695 [2024-05-15 18:20:46.165196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:53.695 [2024-05-15 18:20:46.165218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:53.695 [2024-05-15 18:20:46.165262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:53.695 [2024-05-15 18:20:46.165286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:53.695 [2024-05-15 18:20:46.165339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:53.695 [2024-05-15 18:20:46.165359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:53.695 [2024-05-15 18:20:46.165490] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.524 ms, result 0 00:28:53.695 true 00:28:53.953 18:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:53.953 18:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:53.953 18:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:54.210 18:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:54.210 18:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:54.210 18:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:54.469 [2024-05-15 18:20:46.768326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.469 [2024-05-15 18:20:46.768396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:54.469 [2024-05-15 18:20:46.768424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:54.469 [2024-05-15 18:20:46.768436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.469 [2024-05-15 18:20:46.768474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.469 [2024-05-15 18:20:46.768490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:54.469 [2024-05-15 18:20:46.768503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:54.469 [2024-05-15 18:20:46.768515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.469 [2024-05-15 18:20:46.768543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.469 [2024-05-15 18:20:46.768556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:54.469 [2024-05-15 18:20:46.768568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:54.469 [2024-05-15 18:20:46.768579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.469 [2024-05-15 18:20:46.768670] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL 
property', duration = 0.337 ms, result 0 00:28:54.469 true 00:28:54.469 18:20:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:54.727 { 00:28:54.727 "name": "ftl", 00:28:54.728 "properties": [ 00:28:54.728 { 00:28:54.728 "name": "superblock_version", 00:28:54.728 "value": 5, 00:28:54.728 "read-only": true 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "name": "base_device", 00:28:54.728 "bands": [ 00:28:54.728 { 00:28:54.728 "id": 0, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 1, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 2, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 3, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 4, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 5, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 6, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 7, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 8, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 9, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 10, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 11, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 12, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 13, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 14, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 15, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 16, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 17, 00:28:54.728 "state": "FREE", 00:28:54.728 "validity": 0.0 00:28:54.728 } 00:28:54.728 ], 00:28:54.728 "read-only": true 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "name": "cache_device", 00:28:54.728 "type": "bdev", 00:28:54.728 "chunks": [ 00:28:54.728 { 00:28:54.728 "id": 0, 00:28:54.728 "state": "CLOSED", 00:28:54.728 "utilization": 1.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 1, 00:28:54.728 "state": "CLOSED", 00:28:54.728 "utilization": 1.0 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 2, 00:28:54.728 "state": "OPEN", 00:28:54.728 "utilization": 0.001953125 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "id": 3, 00:28:54.728 "state": "OPEN", 00:28:54.728 "utilization": 0.0 00:28:54.728 } 00:28:54.728 ], 00:28:54.728 "read-only": true 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "name": "verbose_mode", 00:28:54.728 "value": true, 00:28:54.728 "unit": "", 00:28:54.728 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "name": "prep_upgrade_on_shutdown", 00:28:54.728 "value": true, 00:28:54.728 
"unit": "", 00:28:54.728 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:54.728 } 00:28:54.728 ] 00:28:54.728 } 00:28:54.728 18:20:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:54.728 18:20:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83699 ]] 00:28:54.728 18:20:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83699 00:28:54.728 18:20:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 83699 ']' 00:28:54.728 18:20:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 83699 00:28:54.728 18:20:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:28:54.728 18:20:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:54.728 18:20:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83699 00:28:54.728 killing process with pid 83699 00:28:54.728 18:20:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:54.728 18:20:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:54.728 18:20:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83699' 00:28:54.728 18:20:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 83699 00:28:54.728 [2024-05-15 18:20:47.108562] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:54.728 18:20:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 83699 00:28:55.664 [2024-05-15 18:20:48.135553] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:55.664 [2024-05-15 18:20:48.151799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.664 [2024-05-15 18:20:48.151870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:55.664 [2024-05-15 18:20:48.151900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:55.664 [2024-05-15 18:20:48.151921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.664 [2024-05-15 18:20:48.151956] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:55.664 [2024-05-15 18:20:48.155642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.664 [2024-05-15 18:20:48.155677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:55.664 [2024-05-15 18:20:48.155696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.663 ms 00:28:55.664 [2024-05-15 18:20:48.155708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.687 [2024-05-15 18:20:56.778772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.687 [2024-05-15 18:20:56.778853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:05.687 [2024-05-15 18:20:56.778876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8623.024 ms 00:29:05.687 [2024-05-15 18:20:56.778890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.687 [2024-05-15 18:20:56.780170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.687 [2024-05-15 
18:20:56.780212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:05.687 [2024-05-15 18:20:56.780228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.255 ms 00:29:05.687 [2024-05-15 18:20:56.780248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.687 [2024-05-15 18:20:56.781500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.687 [2024-05-15 18:20:56.781533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:29:05.687 [2024-05-15 18:20:56.781562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.209 ms 00:29:05.687 [2024-05-15 18:20:56.781573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.687 [2024-05-15 18:20:56.795510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.687 [2024-05-15 18:20:56.795598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:05.687 [2024-05-15 18:20:56.795631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.848 ms 00:29:05.687 [2024-05-15 18:20:56.795642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.687 [2024-05-15 18:20:56.804208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.687 [2024-05-15 18:20:56.804260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:05.687 [2024-05-15 18:20:56.804277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.523 ms 00:29:05.687 [2024-05-15 18:20:56.804307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.687 [2024-05-15 18:20:56.804432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.687 [2024-05-15 18:20:56.804453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:05.687 [2024-05-15 18:20:56.804467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:29:05.687 [2024-05-15 18:20:56.804479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.687 [2024-05-15 18:20:56.817316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.687 [2024-05-15 18:20:56.817363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:29:05.687 [2024-05-15 18:20:56.817380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.813 ms 00:29:05.687 [2024-05-15 18:20:56.817392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.687 [2024-05-15 18:20:56.830092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.687 [2024-05-15 18:20:56.830146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:29:05.687 [2024-05-15 18:20:56.830193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.654 ms 00:29:05.687 [2024-05-15 18:20:56.830205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.687 [2024-05-15 18:20:56.842959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.688 [2024-05-15 18:20:56.843003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:05.688 [2024-05-15 18:20:56.843029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.705 ms 00:29:05.688 [2024-05-15 18:20:56.843041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:56.856113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:05.688 [2024-05-15 18:20:56.856153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:05.688 [2024-05-15 18:20:56.856169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.930 ms 00:29:05.688 [2024-05-15 18:20:56.856180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:56.856221] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:05.688 [2024-05-15 18:20:56.856245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:05.688 [2024-05-15 18:20:56.856269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:05.688 [2024-05-15 18:20:56.856282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:05.688 [2024-05-15 18:20:56.856320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:05.688 [2024-05-15 18:20:56.856546] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:05.688 [2024-05-15 18:20:56.856558] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 57dbfc00-3576-4c34-bd35-7230d366eca1 00:29:05.688 [2024-05-15 18:20:56.856570] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:05.688 [2024-05-15 18:20:56.856582] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:29:05.688 [2024-05-15 
18:20:56.856592] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:05.688 [2024-05-15 18:20:56.856607] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:05.688 [2024-05-15 18:20:56.856617] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:05.688 [2024-05-15 18:20:56.856629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:05.688 [2024-05-15 18:20:56.856640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:05.688 [2024-05-15 18:20:56.856650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:05.688 [2024-05-15 18:20:56.856660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:05.688 [2024-05-15 18:20:56.856671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.688 [2024-05-15 18:20:56.856683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:05.688 [2024-05-15 18:20:56.856711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.452 ms 00:29:05.688 [2024-05-15 18:20:56.856731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:56.875542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.688 [2024-05-15 18:20:56.875614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:05.688 [2024-05-15 18:20:56.875662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.766 ms 00:29:05.688 [2024-05-15 18:20:56.875675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:56.875947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.688 [2024-05-15 18:20:56.875965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:05.688 [2024-05-15 18:20:56.875997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.225 ms 00:29:05.688 [2024-05-15 18:20:56.876010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:56.939173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.688 [2024-05-15 18:20:56.939257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:05.688 [2024-05-15 18:20:56.939301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.688 [2024-05-15 18:20:56.939330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:56.939394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.688 [2024-05-15 18:20:56.939411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:05.688 [2024-05-15 18:20:56.939432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.688 [2024-05-15 18:20:56.939444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:56.939560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.688 [2024-05-15 18:20:56.939580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:05.688 [2024-05-15 18:20:56.939593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.688 [2024-05-15 18:20:56.939605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:56.939631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.688 
[2024-05-15 18:20:56.939646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:05.688 [2024-05-15 18:20:56.939668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.688 [2024-05-15 18:20:56.939686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:57.054152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.688 [2024-05-15 18:20:57.054248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:05.688 [2024-05-15 18:20:57.054283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.688 [2024-05-15 18:20:57.054296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:57.098115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.688 [2024-05-15 18:20:57.098166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:05.688 [2024-05-15 18:20:57.098184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.688 [2024-05-15 18:20:57.098205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:57.098340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.688 [2024-05-15 18:20:57.098361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:05.688 [2024-05-15 18:20:57.098375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.688 [2024-05-15 18:20:57.098387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:57.098452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.688 [2024-05-15 18:20:57.098469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:05.688 [2024-05-15 18:20:57.098483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.688 [2024-05-15 18:20:57.098495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:57.098653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.688 [2024-05-15 18:20:57.098673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:05.688 [2024-05-15 18:20:57.098686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.688 [2024-05-15 18:20:57.098699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:57.098749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.688 [2024-05-15 18:20:57.098767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:05.688 [2024-05-15 18:20:57.098781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.688 [2024-05-15 18:20:57.098798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:57.098851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:05.688 [2024-05-15 18:20:57.098868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:05.688 [2024-05-15 18:20:57.098881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.688 [2024-05-15 18:20:57.098893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:57.098949] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl] Rollback 00:29:05.688 [2024-05-15 18:20:57.098966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:05.688 [2024-05-15 18:20:57.098978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:05.688 [2024-05-15 18:20:57.098990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.688 [2024-05-15 18:20:57.099155] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8947.307 ms, result 0 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:09.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84341 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84341 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 84341 ']' 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:09.879 18:21:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:09.879 [2024-05-15 18:21:01.732788] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
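
With 'FTL shutdown' complete (8947.307 ms, most of it the core-poller stop plus the persist steps for L2P, NV cache, valid map, P2L, band, trim, and superblock metadata), the target is brought back not by replaying RPCs but from the JSON configuration captured earlier with save_config. A sketch of the restart, using the binary, cpumask, and config path from this run (waitforlisten is the autotest_common.sh helper traced above):

  # relaunch the target from the config snapshot taken before shutdown
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  # block until the default RPC socket /var/tmp/spdk.sock is listening
  waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock
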
00:29:09.879 [2024-05-15 18:21:01.732939] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84341 ] 00:29:09.879 [2024-05-15 18:21:01.900108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.879 [2024-05-15 18:21:02.184876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.815 [2024-05-15 18:21:03.065939] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:10.815 [2024-05-15 18:21:03.066033] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:10.815 [2024-05-15 18:21:03.208643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.815 [2024-05-15 18:21:03.208709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:10.815 [2024-05-15 18:21:03.208734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:10.815 [2024-05-15 18:21:03.208748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.815 [2024-05-15 18:21:03.208837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.815 [2024-05-15 18:21:03.208860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:10.815 [2024-05-15 18:21:03.208875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:29:10.815 [2024-05-15 18:21:03.208887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.815 [2024-05-15 18:21:03.208924] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:10.815 [2024-05-15 18:21:03.209974] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:10.815 [2024-05-15 18:21:03.210018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.815 [2024-05-15 18:21:03.210034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:10.815 [2024-05-15 18:21:03.210048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.101 ms 00:29:10.815 [2024-05-15 18:21:03.210063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.815 [2024-05-15 18:21:03.212087] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:10.815 [2024-05-15 18:21:03.229181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.815 [2024-05-15 18:21:03.229267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:10.815 [2024-05-15 18:21:03.229290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.093 ms 00:29:10.815 [2024-05-15 18:21:03.229325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.815 [2024-05-15 18:21:03.229456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.815 [2024-05-15 18:21:03.229478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:10.815 [2024-05-15 18:21:03.229492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:10.815 [2024-05-15 18:21:03.229504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.815 [2024-05-15 18:21:03.238850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.815 [2024-05-15 18:21:03.238917] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:10.815 [2024-05-15 18:21:03.238939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.211 ms 00:29:10.815 [2024-05-15 18:21:03.238962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.815 [2024-05-15 18:21:03.239040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.815 [2024-05-15 18:21:03.239062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:10.815 [2024-05-15 18:21:03.239077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:29:10.815 [2024-05-15 18:21:03.239090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.815 [2024-05-15 18:21:03.239169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.815 [2024-05-15 18:21:03.239187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:10.815 [2024-05-15 18:21:03.239200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:10.815 [2024-05-15 18:21:03.239224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.815 [2024-05-15 18:21:03.239272] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:10.815 [2024-05-15 18:21:03.244455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.815 [2024-05-15 18:21:03.244506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:10.815 [2024-05-15 18:21:03.244531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.197 ms 00:29:10.815 [2024-05-15 18:21:03.244549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.815 [2024-05-15 18:21:03.244607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.815 [2024-05-15 18:21:03.244625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:10.815 [2024-05-15 18:21:03.244639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:10.815 [2024-05-15 18:21:03.244651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.815 [2024-05-15 18:21:03.244738] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:10.815 [2024-05-15 18:21:03.244773] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:29:10.815 [2024-05-15 18:21:03.244825] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:10.815 [2024-05-15 18:21:03.244855] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:29:10.815 [2024-05-15 18:21:03.244936] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:29:10.815 [2024-05-15 18:21:03.244953] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:10.815 [2024-05-15 18:21:03.244970] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:29:10.815 [2024-05-15 18:21:03.244987] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:10.815 [2024-05-15 18:21:03.245001] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 
00:29:10.815 [2024-05-15 18:21:03.245014] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:10.815 [2024-05-15 18:21:03.245026] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:10.815 [2024-05-15 18:21:03.245038] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:29:10.815 [2024-05-15 18:21:03.245055] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:29:10.815 [2024-05-15 18:21:03.245068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.815 [2024-05-15 18:21:03.245081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:10.815 [2024-05-15 18:21:03.245094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.335 ms 00:29:10.815 [2024-05-15 18:21:03.245107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.815 [2024-05-15 18:21:03.245191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.815 [2024-05-15 18:21:03.245209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:10.815 [2024-05-15 18:21:03.245222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:29:10.815 [2024-05-15 18:21:03.245234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.815 [2024-05-15 18:21:03.245350] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:10.815 [2024-05-15 18:21:03.245371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:10.815 [2024-05-15 18:21:03.245385] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:10.815 [2024-05-15 18:21:03.245397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.815 [2024-05-15 18:21:03.245410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:10.815 [2024-05-15 18:21:03.245422] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:10.815 [2024-05-15 18:21:03.245434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:10.815 [2024-05-15 18:21:03.245445] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:10.815 [2024-05-15 18:21:03.245459] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:10.815 [2024-05-15 18:21:03.245470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.815 [2024-05-15 18:21:03.245481] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:10.815 [2024-05-15 18:21:03.245492] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:10.815 [2024-05-15 18:21:03.245503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.815 [2024-05-15 18:21:03.245515] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:10.815 [2024-05-15 18:21:03.245528] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:29:10.815 [2024-05-15 18:21:03.245539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.815 [2024-05-15 18:21:03.245551] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:10.815 [2024-05-15 18:21:03.245563] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:29:10.815 [2024-05-15 18:21:03.245574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.815 [2024-05-15 18:21:03.245586] ftl_layout.c: 115:dump_region: 
*NOTICE*: [FTL][ftl] Region data_nvc 00:29:10.815 [2024-05-15 18:21:03.245597] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:29:10.815 [2024-05-15 18:21:03.245609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:29:10.815 [2024-05-15 18:21:03.245623] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:10.816 [2024-05-15 18:21:03.245634] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:10.816 [2024-05-15 18:21:03.245645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:29:10.816 [2024-05-15 18:21:03.245656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:10.816 [2024-05-15 18:21:03.245668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:29:10.816 [2024-05-15 18:21:03.245679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:29:10.816 [2024-05-15 18:21:03.245690] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:10.816 [2024-05-15 18:21:03.245701] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:10.816 [2024-05-15 18:21:03.245713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:29:10.816 [2024-05-15 18:21:03.245724] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:10.816 [2024-05-15 18:21:03.245735] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:29:10.816 [2024-05-15 18:21:03.245746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:29:10.816 [2024-05-15 18:21:03.245757] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:10.816 [2024-05-15 18:21:03.245768] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:10.816 [2024-05-15 18:21:03.245779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.816 [2024-05-15 18:21:03.245791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:10.816 [2024-05-15 18:21:03.245802] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:29:10.816 [2024-05-15 18:21:03.245813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.816 [2024-05-15 18:21:03.245824] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:10.816 [2024-05-15 18:21:03.245836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:10.816 [2024-05-15 18:21:03.245848] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:10.816 [2024-05-15 18:21:03.245860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:10.816 [2024-05-15 18:21:03.245874] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:10.816 [2024-05-15 18:21:03.245885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:10.816 [2024-05-15 18:21:03.245899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:10.816 [2024-05-15 18:21:03.245912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:10.816 [2024-05-15 18:21:03.245923] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:10.816 [2024-05-15 18:21:03.245934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:10.816 [2024-05-15 18:21:03.245947] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:10.816 [2024-05-15 
18:21:03.245962] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:10.816 [2024-05-15 18:21:03.245999] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:10.816 [2024-05-15 18:21:03.246012] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:29:10.816 [2024-05-15 18:21:03.246025] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:29:10.816 [2024-05-15 18:21:03.246037] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:29:10.816 [2024-05-15 18:21:03.246049] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:29:10.816 [2024-05-15 18:21:03.246061] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:29:10.816 [2024-05-15 18:21:03.246073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:29:10.816 [2024-05-15 18:21:03.246085] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:29:10.816 [2024-05-15 18:21:03.246097] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:29:10.816 [2024-05-15 18:21:03.246110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:29:10.816 [2024-05-15 18:21:03.246122] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:29:10.816 [2024-05-15 18:21:03.246134] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:29:10.816 [2024-05-15 18:21:03.246147] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:29:10.816 [2024-05-15 18:21:03.246159] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:10.816 [2024-05-15 18:21:03.246178] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:10.816 [2024-05-15 18:21:03.246191] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:10.816 [2024-05-15 18:21:03.246204] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:10.816 [2024-05-15 18:21:03.246216] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:10.816 [2024-05-15 18:21:03.246229] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:10.816 [2024-05-15 18:21:03.246243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.816 [2024-05-15 
18:21:03.246257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:10.816 [2024-05-15 18:21:03.246270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.953 ms 00:29:10.816 [2024-05-15 18:21:03.246282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.816 [2024-05-15 18:21:03.268726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.816 [2024-05-15 18:21:03.268980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:10.816 [2024-05-15 18:21:03.269129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.913 ms 00:29:10.816 [2024-05-15 18:21:03.269183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.816 [2024-05-15 18:21:03.269370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.816 [2024-05-15 18:21:03.269433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:10.816 [2024-05-15 18:21:03.269629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:10.816 [2024-05-15 18:21:03.269682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.816 [2024-05-15 18:21:03.313412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.816 [2024-05-15 18:21:03.313689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:10.816 [2024-05-15 18:21:03.313848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 43.601 ms 00:29:10.816 [2024-05-15 18:21:03.313901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.816 [2024-05-15 18:21:03.314076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.816 [2024-05-15 18:21:03.314132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:10.816 [2024-05-15 18:21:03.314367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:10.816 [2024-05-15 18:21:03.314434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.816 [2024-05-15 18:21:03.315101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.315234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:11.076 [2024-05-15 18:21:03.315268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.550 ms 00:29:11.076 [2024-05-15 18:21:03.315282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.315360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.315380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:11.076 [2024-05-15 18:21:03.315395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:11.076 [2024-05-15 18:21:03.315407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.337070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.337144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:11.076 [2024-05-15 18:21:03.337168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.625 ms 00:29:11.076 [2024-05-15 18:21:03.337182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.354353] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: 
[FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:11.076 [2024-05-15 18:21:03.354458] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:11.076 [2024-05-15 18:21:03.354481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.354496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:11.076 [2024-05-15 18:21:03.354514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.089 ms 00:29:11.076 [2024-05-15 18:21:03.354527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.373440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.373520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:11.076 [2024-05-15 18:21:03.373543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.816 ms 00:29:11.076 [2024-05-15 18:21:03.373556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.389963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.390045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:11.076 [2024-05-15 18:21:03.390067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.287 ms 00:29:11.076 [2024-05-15 18:21:03.390080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.405624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.405693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:11.076 [2024-05-15 18:21:03.405714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.470 ms 00:29:11.076 [2024-05-15 18:21:03.405726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.406336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.406366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:11.076 [2024-05-15 18:21:03.406382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms 00:29:11.076 [2024-05-15 18:21:03.406395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.486717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.486798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:11.076 [2024-05-15 18:21:03.486821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 80.289 ms 00:29:11.076 [2024-05-15 18:21:03.486834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.501394] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:11.076 [2024-05-15 18:21:03.502753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.502786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:11.076 [2024-05-15 18:21:03.502807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.815 ms 00:29:11.076 [2024-05-15 18:21:03.502827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.502951] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.502972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:11.076 [2024-05-15 18:21:03.502986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:11.076 [2024-05-15 18:21:03.502999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.503078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.503097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:11.076 [2024-05-15 18:21:03.503110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:11.076 [2024-05-15 18:21:03.503122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.505283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.505338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:29:11.076 [2024-05-15 18:21:03.505354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.127 ms 00:29:11.076 [2024-05-15 18:21:03.505367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.505405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.505421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:11.076 [2024-05-15 18:21:03.505435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:11.076 [2024-05-15 18:21:03.505446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.505496] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:11.076 [2024-05-15 18:21:03.505515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.505532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:11.076 [2024-05-15 18:21:03.505546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:11.076 [2024-05-15 18:21:03.505557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.537975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.538054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:11.076 [2024-05-15 18:21:03.538078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 32.384 ms 00:29:11.076 [2024-05-15 18:21:03.538092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.538233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.076 [2024-05-15 18:21:03.538253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:11.076 [2024-05-15 18:21:03.538267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:11.076 [2024-05-15 18:21:03.538280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.076 [2024-05-15 18:21:03.539788] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 330.599 ms, result 0 00:29:11.076 [2024-05-15 18:21:03.554507] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.076 [2024-05-15 18:21:03.570607] 
mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:11.336 [2024-05-15 18:21:03.580366] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:11.336 [2024-05-15 18:21:03.580696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:11.336 18:21:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:11.336 18:21:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:29:11.336 18:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:11.336 18:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:11.336 18:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:11.595 [2024-05-15 18:21:03.852843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.595 [2024-05-15 18:21:03.852923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:11.595 [2024-05-15 18:21:03.852947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:11.595 [2024-05-15 18:21:03.852960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.595 [2024-05-15 18:21:03.853000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.595 [2024-05-15 18:21:03.853026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:11.595 [2024-05-15 18:21:03.853040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:11.595 [2024-05-15 18:21:03.853052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.595 [2024-05-15 18:21:03.853083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.595 [2024-05-15 18:21:03.853098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:11.595 [2024-05-15 18:21:03.853121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:11.595 [2024-05-15 18:21:03.853133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.595 [2024-05-15 18:21:03.853215] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.367 ms, result 0 00:29:11.595 true 00:29:11.595 18:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:11.853 { 00:29:11.853 "name": "ftl", 00:29:11.853 "properties": [ 00:29:11.853 { 00:29:11.853 "name": "superblock_version", 00:29:11.853 "value": 5, 00:29:11.853 "read-only": true 00:29:11.853 }, 00:29:11.853 { 00:29:11.853 "name": "base_device", 00:29:11.853 "bands": [ 00:29:11.853 { 00:29:11.853 "id": 0, 00:29:11.853 "state": "CLOSED", 00:29:11.853 "validity": 1.0 00:29:11.853 }, 00:29:11.853 { 00:29:11.853 "id": 1, 00:29:11.853 "state": "CLOSED", 00:29:11.853 "validity": 1.0 00:29:11.853 }, 00:29:11.853 { 00:29:11.853 "id": 2, 00:29:11.853 "state": "CLOSED", 00:29:11.853 "validity": 0.007843137254901933 00:29:11.853 }, 00:29:11.853 { 00:29:11.853 "id": 3, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 4, 00:29:11.854 
"state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 5, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 6, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 7, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 8, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 9, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 10, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 11, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 12, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 13, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 14, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 15, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 16, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 17, 00:29:11.854 "state": "FREE", 00:29:11.854 "validity": 0.0 00:29:11.854 } 00:29:11.854 ], 00:29:11.854 "read-only": true 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "name": "cache_device", 00:29:11.854 "type": "bdev", 00:29:11.854 "chunks": [ 00:29:11.854 { 00:29:11.854 "id": 0, 00:29:11.854 "state": "OPEN", 00:29:11.854 "utilization": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 1, 00:29:11.854 "state": "OPEN", 00:29:11.854 "utilization": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 2, 00:29:11.854 "state": "FREE", 00:29:11.854 "utilization": 0.0 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "id": 3, 00:29:11.854 "state": "FREE", 00:29:11.854 "utilization": 0.0 00:29:11.854 } 00:29:11.854 ], 00:29:11.854 "read-only": true 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "name": "verbose_mode", 00:29:11.854 "value": true, 00:29:11.854 "unit": "", 00:29:11.854 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:11.854 }, 00:29:11.854 { 00:29:11.854 "name": "prep_upgrade_on_shutdown", 00:29:11.854 "value": false, 00:29:11.854 "unit": "", 00:29:11.854 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:11.854 } 00:29:11.854 ] 00:29:11.854 } 00:29:11.854 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:11.854 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:11.854 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:12.112 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:12.112 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:12.112 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | 
select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:12.113 Validate MD5 checksum, iteration 1 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:12.113 18:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:12.371 [2024-05-15 18:21:04.678342] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
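The used=0 / opened=0 checks above gate the test on a freshly started FTL instance before any checksum validation begins. A minimal sketch of that counting step, reusing the exact jq filters logged (only the rpc variable is introduced here for readability):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  # Count NV cache chunks already holding data; expected to be 0 right after startup.
  used=$("$rpc" bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  # Count bands left in the OPENED state; also expected to be 0. Note: in the
  # properties dump above the bands sit under the property named "base_device",
  # so this filter, as logged, appears to match nothing and always yield 0.
  opened=$("$rpc" bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
  [[ $used -eq 0 && $opened -eq 0 ]] || exit 1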
00:29:12.371 [2024-05-15 18:21:04.678792] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84384 ] 00:29:12.371 [2024-05-15 18:21:04.841114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.938 [2024-05-15 18:21:05.172915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.934  Copying: 485/1024 [MB] (485 MBps) Copying: 921/1024 [MB] (436 MBps) Copying: 1024/1024 [MB] (average 460 MBps) 00:29:16.934 00:29:17.192 18:21:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:17.192 18:21:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:19.164 18:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:19.165 Validate MD5 checksum, iteration 2 00:29:19.165 18:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2570fce8b68650057c5b2e32a0d2b7e4 00:29:19.165 18:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2570fce8b68650057c5b2e32a0d2b7e4 != \2\5\7\0\f\c\e\8\b\6\8\6\5\0\0\5\7\c\5\b\2\e\3\2\a\0\d\2\b\7\e\4 ]] 00:29:19.165 18:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:19.165 18:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:19.165 18:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:19.165 18:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:19.165 18:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:19.165 18:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:19.165 18:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:19.165 18:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:19.165 18:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:19.423 [2024-05-15 18:21:11.711851] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
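Each "Validate MD5 checksum" iteration above follows the same pattern: spdk_dd reads a 1024 MiB window from the remote ftln1 bdev over NVMe/TCP into a local file, and that file's MD5 is compared against the sum recorded when the data was written. A minimal sketch of one iteration, using only the flags visible in this log ($skip and $expected are stand-ins for the loop state):

  spdk=/home/vagrant/spdk_repo/spdk
  # Read 1024 x 1 MiB blocks from ftln1, starting $skip MiB into the device, at QD 2.
  "$spdk/build/bin/spdk_dd" '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$spdk/test/ftl/config/ini.json" \
      --ib=ftln1 --of="$spdk/test/ftl/file" \
      --bs=1048576 --count=1024 --qd=2 --skip="$skip"
  # Compare against the checksum captured at write time
  # (e.g. 2570fce8b68650057c5b2e32a0d2b7e4 in iteration 1 above).
  sum=$(md5sum "$spdk/test/ftl/file" | cut -f1 -d' ')
  [[ $sum == "$expected" ]] || exit 1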
00:29:19.423 [2024-05-15 18:21:11.712784] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84451 ] 00:29:19.423 [2024-05-15 18:21:11.882756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.989 [2024-05-15 18:21:12.210422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.094  Copying: 473/1024 [MB] (473 MBps) Copying: 958/1024 [MB] (485 MBps) Copying: 1024/1024 [MB] (average 478 MBps) 00:29:24.094 00:29:24.094 18:21:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:24.094 18:21:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:26.049 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:26.049 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c29bd0424937973884a41a45c683bd5f 00:29:26.049 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c29bd0424937973884a41a45c683bd5f != \c\2\9\b\d\0\4\2\4\9\3\7\9\7\3\8\8\4\a\4\1\a\4\5\c\6\8\3\b\d\5\f ]] 00:29:26.049 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:26.049 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:26.049 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:26.049 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84341 ]] 00:29:26.049 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84341 00:29:26.049 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:26.307 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:26.308 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:26.308 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:26.308 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:26.308 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84525 00:29:26.308 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:26.308 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:26.308 18:21:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84525 00:29:26.308 18:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 84525 ']' 00:29:26.308 18:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.308 18:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:26.308 18:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
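The kill -9 of pid 84341 above is the crux of the test: tcp_target_shutdown_dirty removes the target without letting FTL run its 'FTL shutdown' management process (contrast the clean shutdown at the top of this log), so the relaunch with the same tgt.json must recover from a dirty superblock. A minimal sketch of that helper, using only the variables named in the logged trace:

  # SIGKILL on purpose: no clean FTL shutdown runs, leaving the superblock dirty.
  if [[ -n "$spdk_tgt_pid" ]]; then
      kill -9 "$spdk_tgt_pid"
      unset spdk_tgt_pid
  fi
  # tcp_target_setup (sketched earlier) is then invoked again with the same
  # config, forcing startup onto the dirty-recovery path logged below.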
00:29:26.308 18:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:26.308 18:21:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:26.308 [2024-05-15 18:21:18.659545] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:29:26.308 [2024-05-15 18:21:18.659690] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84525 ] 00:29:26.566 [2024-05-15 18:21:18.824571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.566 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 826: 84341 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:26.824 [2024-05-15 18:21:19.068240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.762 [2024-05-15 18:21:19.971486] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:27.762 [2024-05-15 18:21:19.971570] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:27.762 [2024-05-15 18:21:20.114253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.762 [2024-05-15 18:21:20.114351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:27.762 [2024-05-15 18:21:20.114375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:27.762 [2024-05-15 18:21:20.114388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.762 [2024-05-15 18:21:20.114482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.762 [2024-05-15 18:21:20.114506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:27.762 [2024-05-15 18:21:20.114520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:29:27.763 [2024-05-15 18:21:20.114532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.763 [2024-05-15 18:21:20.114569] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:27.763 [2024-05-15 18:21:20.115647] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:27.763 [2024-05-15 18:21:20.115691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.763 [2024-05-15 18:21:20.115707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:27.763 [2024-05-15 18:21:20.115721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.129 ms 00:29:27.763 [2024-05-15 18:21:20.115733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.763 [2024-05-15 18:21:20.116262] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:27.763 [2024-05-15 18:21:20.138947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.763 [2024-05-15 18:21:20.139036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:27.763 [2024-05-15 18:21:20.139060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.682 ms 00:29:27.763 [2024-05-15 18:21:20.139074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.763 [2024-05-15 18:21:20.152492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.763 
[2024-05-15 18:21:20.152575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:27.763 [2024-05-15 18:21:20.152598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:29:27.763 [2024-05-15 18:21:20.152611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.763 [2024-05-15 18:21:20.153183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.763 [2024-05-15 18:21:20.153220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:27.763 [2024-05-15 18:21:20.153237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.431 ms 00:29:27.763 [2024-05-15 18:21:20.153249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.763 [2024-05-15 18:21:20.153328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.763 [2024-05-15 18:21:20.153351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:27.763 [2024-05-15 18:21:20.153366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:29:27.763 [2024-05-15 18:21:20.153396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.763 [2024-05-15 18:21:20.153443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.763 [2024-05-15 18:21:20.153460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:27.763 [2024-05-15 18:21:20.153474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:27.763 [2024-05-15 18:21:20.153486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.763 [2024-05-15 18:21:20.153524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:27.763 [2024-05-15 18:21:20.158585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.763 [2024-05-15 18:21:20.158652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:27.763 [2024-05-15 18:21:20.158670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.069 ms 00:29:27.763 [2024-05-15 18:21:20.158683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.763 [2024-05-15 18:21:20.158734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.763 [2024-05-15 18:21:20.158752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:27.763 [2024-05-15 18:21:20.158766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:27.763 [2024-05-15 18:21:20.158785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.763 [2024-05-15 18:21:20.158856] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:27.763 [2024-05-15 18:21:20.158893] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:29:27.763 [2024-05-15 18:21:20.158946] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:27.763 [2024-05-15 18:21:20.158972] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:29:27.763 [2024-05-15 18:21:20.159057] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:29:27.763 [2024-05-15 18:21:20.159073] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:27.763 [2024-05-15 18:21:20.159095] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:29:27.763 [2024-05-15 18:21:20.159117] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:27.763 [2024-05-15 18:21:20.159132] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:27.763 [2024-05-15 18:21:20.159145] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:27.763 [2024-05-15 18:21:20.159157] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:27.763 [2024-05-15 18:21:20.159169] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:29:27.763 [2024-05-15 18:21:20.159181] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:29:27.763 [2024-05-15 18:21:20.159193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.763 [2024-05-15 18:21:20.159206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:27.763 [2024-05-15 18:21:20.159219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.341 ms 00:29:27.763 [2024-05-15 18:21:20.159231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.763 [2024-05-15 18:21:20.159345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.763 [2024-05-15 18:21:20.159366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:27.763 [2024-05-15 18:21:20.159379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:29:27.763 [2024-05-15 18:21:20.159391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.763 [2024-05-15 18:21:20.159487] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:27.763 [2024-05-15 18:21:20.159506] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:27.763 [2024-05-15 18:21:20.159519] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:27.763 [2024-05-15 18:21:20.159531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.763 [2024-05-15 18:21:20.159543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:27.763 [2024-05-15 18:21:20.159554] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:27.763 [2024-05-15 18:21:20.159566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:27.763 [2024-05-15 18:21:20.159576] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:27.763 [2024-05-15 18:21:20.159587] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:27.763 [2024-05-15 18:21:20.159596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.763 [2024-05-15 18:21:20.159607] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:27.763 [2024-05-15 18:21:20.159617] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:27.763 [2024-05-15 18:21:20.159628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.763 [2024-05-15 18:21:20.159639] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:27.763 [2024-05-15 18:21:20.159660] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:29:27.763 [2024-05-15 18:21:20.159670] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.763 [2024-05-15 18:21:20.159680] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:27.763 [2024-05-15 18:21:20.159691] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:29:27.763 [2024-05-15 18:21:20.159702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.763 [2024-05-15 18:21:20.159726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:29:27.763 [2024-05-15 18:21:20.159745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:29:27.763 [2024-05-15 18:21:20.159757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:29:27.763 [2024-05-15 18:21:20.159769] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:27.763 [2024-05-15 18:21:20.159780] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:27.763 [2024-05-15 18:21:20.159791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:29:27.763 [2024-05-15 18:21:20.159801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:27.763 [2024-05-15 18:21:20.159812] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:29:27.763 [2024-05-15 18:21:20.159823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:29:27.763 [2024-05-15 18:21:20.159834] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:27.763 [2024-05-15 18:21:20.159845] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:27.763 [2024-05-15 18:21:20.159856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:29:27.763 [2024-05-15 18:21:20.159866] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:27.763 [2024-05-15 18:21:20.159877] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:29:27.763 [2024-05-15 18:21:20.159888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:29:27.763 [2024-05-15 18:21:20.159898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:27.763 [2024-05-15 18:21:20.159909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:27.763 [2024-05-15 18:21:20.159921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.763 [2024-05-15 18:21:20.159932] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:27.763 [2024-05-15 18:21:20.159943] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:29:27.763 [2024-05-15 18:21:20.159953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.763 [2024-05-15 18:21:20.159964] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:27.763 [2024-05-15 18:21:20.159982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:27.763 [2024-05-15 18:21:20.160010] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:27.763 [2024-05-15 18:21:20.160023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.763 [2024-05-15 18:21:20.160036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:27.763 [2024-05-15 18:21:20.160047] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:27.763 [2024-05-15 18:21:20.160058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:27.763 [2024-05-15 18:21:20.160069] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:27.763 [2024-05-15 18:21:20.160079] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:27.763 [2024-05-15 18:21:20.160096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:27.763 [2024-05-15 18:21:20.160135] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:27.763 [2024-05-15 18:21:20.160167] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:27.763 [2024-05-15 18:21:20.160191] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:27.764 [2024-05-15 18:21:20.160211] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:29:27.764 [2024-05-15 18:21:20.160231] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:29:27.764 [2024-05-15 18:21:20.160252] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:29:27.764 [2024-05-15 18:21:20.160271] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:29:27.764 [2024-05-15 18:21:20.160288] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:29:27.764 [2024-05-15 18:21:20.160328] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:29:27.764 [2024-05-15 18:21:20.160348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:29:27.764 [2024-05-15 18:21:20.160367] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:29:27.764 [2024-05-15 18:21:20.160392] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:29:27.764 [2024-05-15 18:21:20.160411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:29:27.764 [2024-05-15 18:21:20.160430] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:29:27.764 [2024-05-15 18:21:20.160453] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:29:27.764 [2024-05-15 18:21:20.160473] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:27.764 [2024-05-15 18:21:20.160489] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:27.764 [2024-05-15 18:21:20.160503] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:27.764 [2024-05-15 18:21:20.160515] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:27.764 
[2024-05-15 18:21:20.160527] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:27.764 [2024-05-15 18:21:20.160539] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:27.764 [2024-05-15 18:21:20.160555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.764 [2024-05-15 18:21:20.160567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:27.764 [2024-05-15 18:21:20.160581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.112 ms 00:29:27.764 [2024-05-15 18:21:20.160601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.764 [2024-05-15 18:21:20.181907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.764 [2024-05-15 18:21:20.182188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:27.764 [2024-05-15 18:21:20.182349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.223 ms 00:29:27.764 [2024-05-15 18:21:20.182473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.764 [2024-05-15 18:21:20.182592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.764 [2024-05-15 18:21:20.182650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:27.764 [2024-05-15 18:21:20.182764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:27.764 [2024-05-15 18:21:20.182889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.764 [2024-05-15 18:21:20.227475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.764 [2024-05-15 18:21:20.227788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:27.764 [2024-05-15 18:21:20.227915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 44.433 ms 00:29:27.764 [2024-05-15 18:21:20.228076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.764 [2024-05-15 18:21:20.228205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.764 [2024-05-15 18:21:20.228268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:27.764 [2024-05-15 18:21:20.228406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:27.764 [2024-05-15 18:21:20.228465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.764 [2024-05-15 18:21:20.228770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.764 [2024-05-15 18:21:20.228946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:27.764 [2024-05-15 18:21:20.228974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.093 ms 00:29:27.764 [2024-05-15 18:21:20.228988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.764 [2024-05-15 18:21:20.229057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.764 [2024-05-15 18:21:20.229077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:27.764 [2024-05-15 18:21:20.229112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:27.764 [2024-05-15 18:21:20.229128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.764 [2024-05-15 18:21:20.252360] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:27.764 [2024-05-15 18:21:20.252639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:27.764 [2024-05-15 18:21:20.252672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.192 ms 00:29:27.764 [2024-05-15 18:21:20.252687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.764 [2024-05-15 18:21:20.252909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.764 [2024-05-15 18:21:20.252933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:27.764 [2024-05-15 18:21:20.252948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:27.764 [2024-05-15 18:21:20.252960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.024 [2024-05-15 18:21:20.276308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.024 [2024-05-15 18:21:20.276379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:28.024 [2024-05-15 18:21:20.276402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.297 ms 00:29:28.024 [2024-05-15 18:21:20.276416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.024 [2024-05-15 18:21:20.289744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.024 [2024-05-15 18:21:20.289823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:28.024 [2024-05-15 18:21:20.289846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.419 ms 00:29:28.024 [2024-05-15 18:21:20.289859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.024 [2024-05-15 18:21:20.373712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.024 [2024-05-15 18:21:20.373817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:28.024 [2024-05-15 18:21:20.373842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 83.734 ms 00:29:28.024 [2024-05-15 18:21:20.373856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.024 [2024-05-15 18:21:20.374019] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:28.024 [2024-05-15 18:21:20.374083] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:28.024 [2024-05-15 18:21:20.374132] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:28.024 [2024-05-15 18:21:20.374179] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:28.024 [2024-05-15 18:21:20.374192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.024 [2024-05-15 18:21:20.374205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:28.024 [2024-05-15 18:21:20.374219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.227 ms 00:29:28.024 [2024-05-15 18:21:20.374231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.024 [2024-05-15 18:21:20.374359] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:28.024 [2024-05-15 18:21:20.374383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.024 [2024-05-15 18:21:20.374396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl] name: Recover open bands P2L 00:29:28.024 [2024-05-15 18:21:20.374410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:28.024 [2024-05-15 18:21:20.374422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.024 [2024-05-15 18:21:20.395704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.024 [2024-05-15 18:21:20.395784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:28.024 [2024-05-15 18:21:20.395807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.244 ms 00:29:28.024 [2024-05-15 18:21:20.395820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.024 [2024-05-15 18:21:20.408395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.024 [2024-05-15 18:21:20.408472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:28.024 [2024-05-15 18:21:20.408494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:29:28.024 [2024-05-15 18:21:20.408506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.024 [2024-05-15 18:21:20.408623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.024 [2024-05-15 18:21:20.408645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:29:28.024 [2024-05-15 18:21:20.408659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:28.024 [2024-05-15 18:21:20.408671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.024 [2024-05-15 18:21:20.408919] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:29:28.662 [2024-05-15 18:21:20.855885] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:29:28.662 [2024-05-15 18:21:20.856122] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:29:28.921 [2024-05-15 18:21:21.302576] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:29:28.921 [2024-05-15 18:21:21.302713] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:28.921 [2024-05-15 18:21:21.302737] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:28.921 [2024-05-15 18:21:21.302755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.921 [2024-05-15 18:21:21.302769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:28.921 [2024-05-15 18:21:21.302787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 894.050 ms 00:29:28.921 [2024-05-15 18:21:21.302800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.921 [2024-05-15 18:21:21.302852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.921 [2024-05-15 18:21:21.302879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:28.921 [2024-05-15 18:21:21.302893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:28.921 [2024-05-15 18:21:21.302905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.921 [2024-05-15 18:21:21.319624] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum 
resident size is: 1 (of 2) MiB 00:29:28.921 [2024-05-15 18:21:21.319936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.921 [2024-05-15 18:21:21.319962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:28.921 [2024-05-15 18:21:21.319980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.002 ms 00:29:28.921 [2024-05-15 18:21:21.320014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.921 [2024-05-15 18:21:21.320844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.921 [2024-05-15 18:21:21.320886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:29:28.921 [2024-05-15 18:21:21.320904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.664 ms 00:29:28.921 [2024-05-15 18:21:21.320917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.921 [2024-05-15 18:21:21.323361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.921 [2024-05-15 18:21:21.323399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:28.921 [2024-05-15 18:21:21.323415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.404 ms 00:29:28.921 [2024-05-15 18:21:21.323427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.921 [2024-05-15 18:21:21.359550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.921 [2024-05-15 18:21:21.359657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:29:28.921 [2024-05-15 18:21:21.359682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 36.073 ms 00:29:28.921 [2024-05-15 18:21:21.359695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.921 [2024-05-15 18:21:21.359931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.921 [2024-05-15 18:21:21.359972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:28.921 [2024-05-15 18:21:21.360004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:29:28.921 [2024-05-15 18:21:21.360019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.921 [2024-05-15 18:21:21.362178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.921 [2024-05-15 18:21:21.362223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:29:28.921 [2024-05-15 18:21:21.362244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.121 ms 00:29:28.921 [2024-05-15 18:21:21.362256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.921 [2024-05-15 18:21:21.362319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.921 [2024-05-15 18:21:21.362338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:28.921 [2024-05-15 18:21:21.362352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:28.921 [2024-05-15 18:21:21.362364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.921 [2024-05-15 18:21:21.362412] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:28.921 [2024-05-15 18:21:21.362430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.921 [2024-05-15 18:21:21.362442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:28.921 
[2024-05-15 18:21:21.362458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:28.921 [2024-05-15 18:21:21.362470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.921 [2024-05-15 18:21:21.362542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.921 [2024-05-15 18:21:21.362565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:28.921 [2024-05-15 18:21:21.362578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:29:28.921 [2024-05-15 18:21:21.362596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.921 [2024-05-15 18:21:21.364050] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1249.248 ms, result 0 00:29:28.921 [2024-05-15 18:21:21.376475] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.921 [2024-05-15 18:21:21.392530] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:28.921 [2024-05-15 18:21:21.402245] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:28.921 [2024-05-15 18:21:21.402606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:29.179 Validate MD5 checksum, iteration 1 00:29:29.179 18:21:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:29.179 18:21:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:29:29.179 18:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:29.180 18:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:29.180 18:21:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:29.180 18:21:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:29.180 18:21:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:29.180 18:21:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:29.180 18:21:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:29.180 18:21:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:29.180 18:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:29.180 18:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:29.180 18:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:29.180 18:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:29.180 18:21:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:29.180 [2024-05-15 18:21:21.523062] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
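The xtrace above is the heart of the post-upgrade verification: test_validate_checksum reads the FTL bdev back in 1 GiB windows through spdk_dd attached over NVMe/TCP and hashes each window. A minimal sketch of the loop implied by upgrade_shutdown.sh@96-105, assuming the iteration count and reference digests come from the surrounding script (the sums array below is a hypothetical stand-in for that bookkeeping):

    # Sketch only; reconstructed from the trace, not quoted from the script.
    # tcp_dd (common.sh@198-199) wraps spdk_dd: read --count blocks of --bs
    # bytes from bdev ftln1, starting --skip blocks in, at queue depth --qd.
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        if [[ $sum != "${sums[i]}" ]]; then   # sums[]: hypothetical reference digests
            return 1                          # digest mismatch fails the test
        fi
    done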
00:29:29.180 [2024-05-15 18:21:21.523517] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84564 ] 00:29:29.438 [2024-05-15 18:21:21.690874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.697 [2024-05-15 18:21:22.004898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.000  Copying: 488/1024 [MB] (488 MBps) Copying: 958/1024 [MB] (470 MBps) Copying: 1024/1024 [MB] (average 473 MBps) 00:29:35.000 00:29:35.000 18:21:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:35.000 18:21:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:37.527 18:21:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:37.527 Validate MD5 checksum, iteration 2 00:29:37.527 18:21:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2570fce8b68650057c5b2e32a0d2b7e4 00:29:37.527 18:21:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2570fce8b68650057c5b2e32a0d2b7e4 != \2\5\7\0\f\c\e\8\b\6\8\6\5\0\0\5\7\c\5\b\2\e\3\2\a\0\d\2\b\7\e\4 ]] 00:29:37.527 18:21:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:37.527 18:21:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:37.527 18:21:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:37.527 18:21:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:37.527 18:21:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:37.527 18:21:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:37.527 18:21:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:37.527 18:21:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:37.527 18:21:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:37.527 [2024-05-15 18:21:29.760054] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
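A note on the odd-looking comparison in the trace (upgrade_shutdown.sh@105): the right-hand side of != inside [[ ]] is a glob pattern, and when that side is quoted so the digest is matched literally, bash's xtrace renders each literal character with a backslash. That is why the log shows \2\5\7\0\f\c\e\8... instead of the plain digest; nothing is mangled. Stripped of the trace quoting, the check is just a string comparison, roughly:

    # cut -f1 '-d ' splits md5sum's "digest  filename" output on the space
    sum=$(md5sum "$file" | cut -f1 -d' ')
    if [[ $sum != "$expected_sum" ]]; then   # file/expected_sum: illustrative names
        return 1
    fi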
00:29:37.527 [2024-05-15 18:21:29.760287] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84645 ] 00:29:37.527 [2024-05-15 18:21:29.974723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.784 [2024-05-15 18:21:30.244769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.893  Copying: 471/1024 [MB] (471 MBps) Copying: 947/1024 [MB] (476 MBps) Copying: 1024/1024 [MB] (average 472 MBps) 00:29:45.893 00:29:45.893 18:21:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:45.893 18:21:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:47.796 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:47.796 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c29bd0424937973884a41a45c683bd5f 00:29:47.796 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c29bd0424937973884a41a45c683bd5f != \c\2\9\b\d\0\4\2\4\9\3\7\9\7\3\8\8\4\a\4\1\a\4\5\c\6\8\3\b\d\5\f ]] 00:29:47.796 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:47.796 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:47.796 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:47.796 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:47.796 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:47.796 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84525 ]] 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84525 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 84525 ']' 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 84525 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84525 00:29:48.056 killing process with pid 84525 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84525' 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 84525 00:29:48.056 [2024-05-15 18:21:40.411108] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:48.056 18:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 84525 00:29:48.994 [2024-05-15 18:21:41.428418] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:48.994 [2024-05-15 18:21:41.445918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.994 [2024-05-15 18:21:41.445985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:48.994 [2024-05-15 18:21:41.446008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:48.994 [2024-05-15 18:21:41.446020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.994 [2024-05-15 18:21:41.446054] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:48.994 [2024-05-15 18:21:41.449769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.994 [2024-05-15 18:21:41.449801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:48.994 [2024-05-15 18:21:41.449818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.691 ms 00:29:48.994 [2024-05-15 18:21:41.449830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.994 [2024-05-15 18:21:41.450111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.994 [2024-05-15 18:21:41.450132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:48.994 [2024-05-15 18:21:41.450147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.245 ms 00:29:48.994 [2024-05-15 18:21:41.450174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.994 [2024-05-15 18:21:41.451446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.994 [2024-05-15 18:21:41.451474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:48.994 [2024-05-15 18:21:41.451489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.234 ms 00:29:48.994 [2024-05-15 18:21:41.451502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.994 [2024-05-15 18:21:41.452744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.994 [2024-05-15 18:21:41.452815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:29:48.994 [2024-05-15 18:21:41.452830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.199 ms 00:29:48.994 [2024-05-15 18:21:41.452842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.994 [2024-05-15 18:21:41.465868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.994 [2024-05-15 18:21:41.465915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:48.994 [2024-05-15 18:21:41.465934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.959 ms 00:29:48.994 [2024-05-15 18:21:41.465947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.994 [2024-05-15 18:21:41.472523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.994 [2024-05-15 18:21:41.472567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:48.994 [2024-05-15 18:21:41.472584] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.519 ms 00:29:48.994 [2024-05-15 18:21:41.472597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.994 [2024-05-15 18:21:41.472690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.994 [2024-05-15 18:21:41.472711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:48.994 [2024-05-15 18:21:41.472726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:29:48.994 [2024-05-15 18:21:41.472738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.994 [2024-05-15 18:21:41.485314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.994 [2024-05-15 18:21:41.485365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:29:48.994 [2024-05-15 18:21:41.485398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.552 ms 00:29:48.994 [2024-05-15 18:21:41.485409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.255 [2024-05-15 18:21:41.497773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.255 [2024-05-15 18:21:41.497847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:29:49.255 [2024-05-15 18:21:41.497864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.304 ms 00:29:49.255 [2024-05-15 18:21:41.497875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.255 [2024-05-15 18:21:41.510456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.255 [2024-05-15 18:21:41.510496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:49.255 [2024-05-15 18:21:41.510520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.538 ms 00:29:49.255 [2024-05-15 18:21:41.510532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.255 [2024-05-15 18:21:41.523065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.255 [2024-05-15 18:21:41.523128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:49.255 [2024-05-15 18:21:41.523144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.453 ms 00:29:49.255 [2024-05-15 18:21:41.523155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.255 [2024-05-15 18:21:41.523207] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:49.255 [2024-05-15 18:21:41.523253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:49.255 [2024-05-15 18:21:41.523284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:49.255 [2024-05-15 18:21:41.523297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:49.255 [2024-05-15 18:21:41.523327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 
00:29:49.255 [2024-05-15 18:21:41.523416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:49.255 [2024-05-15 18:21:41.523556] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:49.255 [2024-05-15 18:21:41.523569] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 57dbfc00-3576-4c34-bd35-7230d366eca1 00:29:49.255 [2024-05-15 18:21:41.523581] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:49.255 [2024-05-15 18:21:41.523593] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:49.255 [2024-05-15 18:21:41.523604] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:49.255 [2024-05-15 18:21:41.523616] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:49.255 [2024-05-15 18:21:41.523627] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:49.255 [2024-05-15 18:21:41.523639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:49.255 [2024-05-15 18:21:41.523651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:49.255 [2024-05-15 18:21:41.523661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:49.255 [2024-05-15 18:21:41.523679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:49.255 [2024-05-15 18:21:41.523691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.255 [2024-05-15 18:21:41.523703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:49.255 [2024-05-15 18:21:41.523732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.485 ms 00:29:49.255 [2024-05-15 18:21:41.523744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.255 [2024-05-15 18:21:41.541533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.255 [2024-05-15 18:21:41.541621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:49.255 [2024-05-15 18:21:41.541640] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.752 ms 00:29:49.255 [2024-05-15 18:21:41.541653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.255 [2024-05-15 18:21:41.541987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.255 [2024-05-15 18:21:41.542016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:49.255 [2024-05-15 18:21:41.542031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.228 ms 00:29:49.255 [2024-05-15 18:21:41.542043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.255 [2024-05-15 18:21:41.603304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:49.255 [2024-05-15 18:21:41.603376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:49.255 [2024-05-15 18:21:41.603397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:49.255 [2024-05-15 18:21:41.603426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.255 [2024-05-15 18:21:41.603496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:49.255 [2024-05-15 18:21:41.603522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:49.255 [2024-05-15 18:21:41.603535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:49.255 [2024-05-15 18:21:41.603547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.255 [2024-05-15 18:21:41.603665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:49.255 [2024-05-15 18:21:41.603686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:49.255 [2024-05-15 18:21:41.603699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:49.255 [2024-05-15 18:21:41.603711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.255 [2024-05-15 18:21:41.603738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:49.255 [2024-05-15 18:21:41.603753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:49.255 [2024-05-15 18:21:41.603774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:49.255 [2024-05-15 18:21:41.603786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.255 [2024-05-15 18:21:41.716388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:49.255 [2024-05-15 18:21:41.716483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:49.255 [2024-05-15 18:21:41.716504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:49.255 [2024-05-15 18:21:41.716518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.515 [2024-05-15 18:21:41.759205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:49.515 [2024-05-15 18:21:41.759280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:49.515 [2024-05-15 18:21:41.759347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:49.515 [2024-05-15 18:21:41.759361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.515 [2024-05-15 18:21:41.759466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:49.515 [2024-05-15 18:21:41.759492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 
00:29:49.515 [2024-05-15 18:21:41.759514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:49.515 [2024-05-15 18:21:41.759527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.515 [2024-05-15 18:21:41.759587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:49.515 [2024-05-15 18:21:41.759605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:49.515 [2024-05-15 18:21:41.759619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:49.515 [2024-05-15 18:21:41.759637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.515 [2024-05-15 18:21:41.759774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:49.515 [2024-05-15 18:21:41.759796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:49.515 [2024-05-15 18:21:41.759809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:49.515 [2024-05-15 18:21:41.759821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.515 [2024-05-15 18:21:41.759881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:49.515 [2024-05-15 18:21:41.759908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:49.515 [2024-05-15 18:21:41.759922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:49.515 [2024-05-15 18:21:41.759934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.515 [2024-05-15 18:21:41.759987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:49.515 [2024-05-15 18:21:41.760018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:49.515 [2024-05-15 18:21:41.760031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:49.515 [2024-05-15 18:21:41.760043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.515 [2024-05-15 18:21:41.760100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:49.515 [2024-05-15 18:21:41.760118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:49.515 [2024-05-15 18:21:41.760131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:49.515 [2024-05-15 18:21:41.760149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.515 [2024-05-15 18:21:41.760319] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 314.345 ms, result 0 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:50.889 Remove shared memory files 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@205 -- # rm -f rm -f 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84341 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:50.889 ************************************ 00:29:50.889 END TEST ftl_upgrade_shutdown 00:29:50.889 ************************************ 00:29:50.889 00:29:50.889 real 1m42.155s 00:29:50.889 user 2m23.185s 00:29:50.889 sys 0m24.840s 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:50.889 18:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:50.889 18:21:43 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:50.889 18:21:43 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:50.889 18:21:43 ftl -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:29:50.889 18:21:43 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:50.889 18:21:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:50.889 ************************************ 00:29:50.889 START TEST ftl_restore_fast 00:29:50.889 ************************************ 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:50.889 * Looking for test storage... 00:29:50.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
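The preamble above is SPDK's standard test bootstrap: restore.sh sources test/ftl/common.sh, which pins the test and repository directories from the executing script's own location before anything else runs. Reconstructed from the xtrace (the variable names match the trace; the exact quoting is an assumption):

    # common.sh@8-10 as the trace implies; $0 is restore.sh here
    testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py             # used for every bdev RPC below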
00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.vEi3AghkXT 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:50.889 18:21:43 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=84847 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 84847 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- common/autotest_common.sh@827 -- # '[' -z 84847 ']' 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:50.889 18:21:43 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:50.889 [2024-05-15 18:21:43.332754] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:29:50.889 [2024-05-15 18:21:43.333139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84847 ] 00:29:51.147 [2024-05-15 18:21:43.513551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.405 [2024-05-15 18:21:43.763668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.339 18:21:44 ftl.ftl_restore_fast -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:52.339 18:21:44 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # return 0 00:29:52.339 18:21:44 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:52.339 18:21:44 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:52.339 18:21:44 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:52.339 18:21:44 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:52.339 18:21:44 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:52.339 18:21:44 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:52.597 18:21:44 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:52.597 18:21:44 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:52.597 18:21:44 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:52.597 18:21:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:29:52.597 18:21:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:29:52.597 18:21:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:29:52.597 18:21:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 
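The bdev JSON dump that follows is produced by get_bdev_size, which the FTL helpers lean on for every sizing decision: it fetches the bdev descriptor over RPC and converts block_size times num_blocks into MiB. A sketch consistent with the locals and jq filters in the trace; the body is a reconstruction, and the final arithmetic is inferred from bs=4096, nb=1310720 yielding 5120:

    # get_bdev_size as implied by autotest_common.sh@1374-1384 in the trace
    get_bdev_size() {
        local bdev_name=$1
        local bdev_info bs nb bdev_size
        bdev_info=$($rpc_py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")      # 4096 for nvme0n1
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")      # 1310720 for nvme0n1
        bdev_size=$((bs * nb / 1024 / 1024))             # bytes -> MiB
        echo "$bdev_size"                                # -> 5120
    }

The 5120 MiB result is what create_base_bdev then checks against the requested 103424 MiB ([[ 103424 -le 5120 ]] below); since the 5 GiB QEMU namespace is too small on its own, the test appears to fall through to carving a thin-provisioned 103424 MiB logical volume (-t) on top of it instead.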
00:29:52.597 18:21:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:52.855 18:21:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:29:52.855 { 00:29:52.855 "name": "nvme0n1", 00:29:52.855 "aliases": [ 00:29:52.855 "772dd65c-a8f8-4149-9f75-71dff0fa193d" 00:29:52.855 ], 00:29:52.855 "product_name": "NVMe disk", 00:29:52.855 "block_size": 4096, 00:29:52.855 "num_blocks": 1310720, 00:29:52.855 "uuid": "772dd65c-a8f8-4149-9f75-71dff0fa193d", 00:29:52.855 "assigned_rate_limits": { 00:29:52.855 "rw_ios_per_sec": 0, 00:29:52.855 "rw_mbytes_per_sec": 0, 00:29:52.855 "r_mbytes_per_sec": 0, 00:29:52.855 "w_mbytes_per_sec": 0 00:29:52.855 }, 00:29:52.855 "claimed": true, 00:29:52.855 "claim_type": "read_many_write_one", 00:29:52.855 "zoned": false, 00:29:52.855 "supported_io_types": { 00:29:52.855 "read": true, 00:29:52.855 "write": true, 00:29:52.855 "unmap": true, 00:29:52.855 "write_zeroes": true, 00:29:52.855 "flush": true, 00:29:52.855 "reset": true, 00:29:52.855 "compare": true, 00:29:52.855 "compare_and_write": false, 00:29:52.855 "abort": true, 00:29:52.855 "nvme_admin": true, 00:29:52.855 "nvme_io": true 00:29:52.855 }, 00:29:52.855 "driver_specific": { 00:29:52.855 "nvme": [ 00:29:52.855 { 00:29:52.855 "pci_address": "0000:00:11.0", 00:29:52.855 "trid": { 00:29:52.855 "trtype": "PCIe", 00:29:52.855 "traddr": "0000:00:11.0" 00:29:52.855 }, 00:29:52.855 "ctrlr_data": { 00:29:52.855 "cntlid": 0, 00:29:52.855 "vendor_id": "0x1b36", 00:29:52.855 "model_number": "QEMU NVMe Ctrl", 00:29:52.855 "serial_number": "12341", 00:29:52.855 "firmware_revision": "8.0.0", 00:29:52.855 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:52.855 "oacs": { 00:29:52.855 "security": 0, 00:29:52.855 "format": 1, 00:29:52.855 "firmware": 0, 00:29:52.855 "ns_manage": 1 00:29:52.855 }, 00:29:52.855 "multi_ctrlr": false, 00:29:52.855 "ana_reporting": false 00:29:52.855 }, 00:29:52.855 "vs": { 00:29:52.855 "nvme_version": "1.4" 00:29:52.855 }, 00:29:52.855 "ns_data": { 00:29:52.855 "id": 1, 00:29:52.855 "can_share": false 00:29:52.855 } 00:29:52.855 } 00:29:52.855 ], 00:29:52.855 "mp_policy": "active_passive" 00:29:52.855 } 00:29:52.855 } 00:29:52.855 ]' 00:29:52.855 18:21:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:29:52.855 18:21:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:29:52.855 18:21:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:29:52.855 18:21:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=1310720 00:29:52.855 18:21:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:29:52.855 18:21:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 5120 00:29:52.855 18:21:45 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:52.855 18:21:45 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:52.855 18:21:45 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:52.855 18:21:45 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:52.855 18:21:45 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:53.114 18:21:45 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=195f8b3a-0a91-4fb5-9875-b385a85326cb 00:29:53.114 18:21:45 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:53.114 18:21:45 
ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 195f8b3a-0a91-4fb5-9875-b385a85326cb 00:29:53.372 18:21:45 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:53.630 18:21:46 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=169e95ce-24e0-48ef-a88c-41168f029f61 00:29:53.630 18:21:46 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 169e95ce-24e0-48ef-a88c-41168f029f61 00:29:53.888 18:21:46 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b 00:29:53.888 18:21:46 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:53.889 18:21:46 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b 00:29:53.889 18:21:46 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:53.889 18:21:46 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:53.889 18:21:46 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b 00:29:53.889 18:21:46 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:53.889 18:21:46 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b 00:29:53.889 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b 00:29:53.889 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:29:53.889 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:29:53.889 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:29:53.889 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b 00:29:54.147 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:29:54.147 { 00:29:54.147 "name": "1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b", 00:29:54.147 "aliases": [ 00:29:54.147 "lvs/nvme0n1p0" 00:29:54.147 ], 00:29:54.147 "product_name": "Logical Volume", 00:29:54.147 "block_size": 4096, 00:29:54.147 "num_blocks": 26476544, 00:29:54.147 "uuid": "1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b", 00:29:54.147 "assigned_rate_limits": { 00:29:54.147 "rw_ios_per_sec": 0, 00:29:54.147 "rw_mbytes_per_sec": 0, 00:29:54.147 "r_mbytes_per_sec": 0, 00:29:54.147 "w_mbytes_per_sec": 0 00:29:54.147 }, 00:29:54.147 "claimed": false, 00:29:54.147 "zoned": false, 00:29:54.147 "supported_io_types": { 00:29:54.147 "read": true, 00:29:54.147 "write": true, 00:29:54.147 "unmap": true, 00:29:54.147 "write_zeroes": true, 00:29:54.147 "flush": false, 00:29:54.147 "reset": true, 00:29:54.147 "compare": false, 00:29:54.147 "compare_and_write": false, 00:29:54.147 "abort": false, 00:29:54.147 "nvme_admin": false, 00:29:54.147 "nvme_io": false 00:29:54.147 }, 00:29:54.147 "driver_specific": { 00:29:54.147 "lvol": { 00:29:54.147 "lvol_store_uuid": "169e95ce-24e0-48ef-a88c-41168f029f61", 00:29:54.147 "base_bdev": "nvme0n1", 00:29:54.147 "thin_provision": true, 00:29:54.147 "num_allocated_clusters": 0, 00:29:54.147 "snapshot": false, 00:29:54.147 "clone": false, 00:29:54.147 "esnap_clone": false 00:29:54.147 } 00:29:54.147 } 00:29:54.147 } 00:29:54.147 ]' 00:29:54.147 18:21:46 
ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:29:54.147 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:29:54.147 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:29:54.147 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:29:54.147 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:29:54.147 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:29:54.147 18:21:46 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:54.147 18:21:46 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:54.147 18:21:46 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:54.713 18:21:46 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:54.713 18:21:46 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:54.713 18:21:46 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b 00:29:54.713 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b 00:29:54.713 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:29:54.713 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:29:54.713 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:29:54.714 18:21:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b 00:29:54.714 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:29:54.714 { 00:29:54.714 "name": "1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b", 00:29:54.714 "aliases": [ 00:29:54.714 "lvs/nvme0n1p0" 00:29:54.714 ], 00:29:54.714 "product_name": "Logical Volume", 00:29:54.714 "block_size": 4096, 00:29:54.714 "num_blocks": 26476544, 00:29:54.714 "uuid": "1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b", 00:29:54.714 "assigned_rate_limits": { 00:29:54.714 "rw_ios_per_sec": 0, 00:29:54.714 "rw_mbytes_per_sec": 0, 00:29:54.714 "r_mbytes_per_sec": 0, 00:29:54.714 "w_mbytes_per_sec": 0 00:29:54.714 }, 00:29:54.714 "claimed": false, 00:29:54.714 "zoned": false, 00:29:54.714 "supported_io_types": { 00:29:54.714 "read": true, 00:29:54.714 "write": true, 00:29:54.714 "unmap": true, 00:29:54.714 "write_zeroes": true, 00:29:54.714 "flush": false, 00:29:54.714 "reset": true, 00:29:54.714 "compare": false, 00:29:54.714 "compare_and_write": false, 00:29:54.714 "abort": false, 00:29:54.714 "nvme_admin": false, 00:29:54.714 "nvme_io": false 00:29:54.714 }, 00:29:54.714 "driver_specific": { 00:29:54.714 "lvol": { 00:29:54.714 "lvol_store_uuid": "169e95ce-24e0-48ef-a88c-41168f029f61", 00:29:54.714 "base_bdev": "nvme0n1", 00:29:54.714 "thin_provision": true, 00:29:54.714 "num_allocated_clusters": 0, 00:29:54.714 "snapshot": false, 00:29:54.714 "clone": false, 00:29:54.714 "esnap_clone": false 00:29:54.714 } 00:29:54.714 } 00:29:54.714 } 00:29:54.714 ]' 00:29:54.714 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:29:54.972 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:29:54.972 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 
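The records around this point show the harness's get_bdev_size helper at work: it fetches the bdev descriptor over RPC, pulls block_size and num_blocks out with jq, and derives the size in MiB. A minimal sketch of that computation, using the values visible in this log (paths abbreviated; the real helper lives in autotest_common.sh and may differ in detail):

  bdev=1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b
  info=$(scripts/rpc.py bdev_get_bdevs -b "$bdev")
  bs=$(jq '.[] .block_size' <<<"$info")       # 4096
  nb=$(jq '.[] .num_blocks' <<<"$info")       # 26476544
  echo $(( bs * nb / 1024 / 1024 ))           # 103424 (MiB)

The result, 103424 MiB, is what the nb= and bdev_size= records below report for this logical volume.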
00:29:54.972 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:29:54.972 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:29:54.972 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:29:54.972 18:21:47 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:54.972 18:21:47 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:55.230 18:21:47 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:55.230 18:21:47 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b 00:29:55.230 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b 00:29:55.230 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:29:55.230 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:29:55.230 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:29:55.230 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:29:55.488 { 00:29:55.488 "name": "1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b", 00:29:55.488 "aliases": [ 00:29:55.488 "lvs/nvme0n1p0" 00:29:55.488 ], 00:29:55.488 "product_name": "Logical Volume", 00:29:55.488 "block_size": 4096, 00:29:55.488 "num_blocks": 26476544, 00:29:55.488 "uuid": "1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b", 00:29:55.488 "assigned_rate_limits": { 00:29:55.488 "rw_ios_per_sec": 0, 00:29:55.488 "rw_mbytes_per_sec": 0, 00:29:55.488 "r_mbytes_per_sec": 0, 00:29:55.488 "w_mbytes_per_sec": 0 00:29:55.488 }, 00:29:55.488 "claimed": false, 00:29:55.488 "zoned": false, 00:29:55.488 "supported_io_types": { 00:29:55.488 "read": true, 00:29:55.488 "write": true, 00:29:55.488 "unmap": true, 00:29:55.488 "write_zeroes": true, 00:29:55.488 "flush": false, 00:29:55.488 "reset": true, 00:29:55.488 "compare": false, 00:29:55.488 "compare_and_write": false, 00:29:55.488 "abort": false, 00:29:55.488 "nvme_admin": false, 00:29:55.488 "nvme_io": false 00:29:55.488 }, 00:29:55.488 "driver_specific": { 00:29:55.488 "lvol": { 00:29:55.488 "lvol_store_uuid": "169e95ce-24e0-48ef-a88c-41168f029f61", 00:29:55.488 "base_bdev": "nvme0n1", 00:29:55.488 "thin_provision": true, 00:29:55.488 "num_allocated_clusters": 0, 00:29:55.488 "snapshot": false, 00:29:55.488 "clone": false, 00:29:55.488 "esnap_clone": false 00:29:55.488 } 00:29:55.488 } 00:29:55.488 } 00:29:55.488 ]' 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # 
ftl_construct_args='bdev_ftl_create -b ftl0 -d 1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b --l2p_dram_limit 10' 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:55.488 18:21:47 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1f4fe3a4-b2a6-4a00-b065-e40c2dbafd0b --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:55.747 [2024-05-15 18:21:48.175363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.747 [2024-05-15 18:21:48.175469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:55.747 [2024-05-15 18:21:48.175515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:55.747 [2024-05-15 18:21:48.175543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.747 [2024-05-15 18:21:48.175680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.747 [2024-05-15 18:21:48.175713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:55.747 [2024-05-15 18:21:48.175751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:29:55.747 [2024-05-15 18:21:48.175773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.747 [2024-05-15 18:21:48.175835] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:55.747 [2024-05-15 18:21:48.177452] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:55.747 [2024-05-15 18:21:48.177532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.747 [2024-05-15 18:21:48.177560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:55.747 [2024-05-15 18:21:48.177597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.709 ms 00:29:55.747 [2024-05-15 18:21:48.177620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.748 [2024-05-15 18:21:48.177803] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 17e6d8e7-7482-43f5-9327-0822f86d5edf 00:29:55.748 [2024-05-15 18:21:48.180170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.748 [2024-05-15 18:21:48.180244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:55.748 [2024-05-15 18:21:48.180284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:55.748 [2024-05-15 18:21:48.180344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.748 [2024-05-15 18:21:48.192575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.748 [2024-05-15 18:21:48.192694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:55.748 [2024-05-15 18:21:48.192732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.050 ms 00:29:55.748 [2024-05-15 18:21:48.192763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.748 
[2024-05-15 18:21:48.192985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.748 [2024-05-15 18:21:48.193028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:55.748 [2024-05-15 18:21:48.193054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:29:55.748 [2024-05-15 18:21:48.193084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.748 [2024-05-15 18:21:48.193231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.748 [2024-05-15 18:21:48.193277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:55.748 [2024-05-15 18:21:48.193345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:55.748 [2024-05-15 18:21:48.193378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.748 [2024-05-15 18:21:48.193442] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:55.748 [2024-05-15 18:21:48.201390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.748 [2024-05-15 18:21:48.201466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:55.748 [2024-05-15 18:21:48.201502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.952 ms 00:29:55.748 [2024-05-15 18:21:48.201528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.748 [2024-05-15 18:21:48.201630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.748 [2024-05-15 18:21:48.201662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:55.748 [2024-05-15 18:21:48.201693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:55.748 [2024-05-15 18:21:48.201715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.748 [2024-05-15 18:21:48.201812] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:55.748 [2024-05-15 18:21:48.202018] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:29:55.748 [2024-05-15 18:21:48.202064] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:55.748 [2024-05-15 18:21:48.202094] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:29:55.748 [2024-05-15 18:21:48.202131] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:55.748 [2024-05-15 18:21:48.202161] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:55.748 [2024-05-15 18:21:48.202191] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:55.748 [2024-05-15 18:21:48.202214] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:55.748 [2024-05-15 18:21:48.202239] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:29:55.748 [2024-05-15 18:21:48.202262] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:29:55.748 [2024-05-15 18:21:48.202331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.748 [2024-05-15 18:21:48.202358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:55.748 [2024-05-15 
18:21:48.202399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:29:55.748 [2024-05-15 18:21:48.202422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.748 [2024-05-15 18:21:48.202533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.748 [2024-05-15 18:21:48.202577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:55.748 [2024-05-15 18:21:48.202605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:55.748 [2024-05-15 18:21:48.202627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.748 [2024-05-15 18:21:48.202751] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:55.748 [2024-05-15 18:21:48.202782] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:55.748 [2024-05-15 18:21:48.202809] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:55.748 [2024-05-15 18:21:48.202827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.748 [2024-05-15 18:21:48.202842] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:55.748 [2024-05-15 18:21:48.202854] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:55.748 [2024-05-15 18:21:48.202868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:55.748 [2024-05-15 18:21:48.202880] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:55.748 [2024-05-15 18:21:48.202895] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:55.748 [2024-05-15 18:21:48.202906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:55.748 [2024-05-15 18:21:48.202920] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:55.748 [2024-05-15 18:21:48.202931] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:55.748 [2024-05-15 18:21:48.202945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:55.748 [2024-05-15 18:21:48.202957] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:55.748 [2024-05-15 18:21:48.202972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:29:55.748 [2024-05-15 18:21:48.202984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.748 [2024-05-15 18:21:48.202998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:55.748 [2024-05-15 18:21:48.203010] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:29:55.748 [2024-05-15 18:21:48.203025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.748 [2024-05-15 18:21:48.203047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:29:55.748 [2024-05-15 18:21:48.203061] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:29:55.748 [2024-05-15 18:21:48.203081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:29:55.748 [2024-05-15 18:21:48.203106] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:55.748 [2024-05-15 18:21:48.203127] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:55.748 [2024-05-15 18:21:48.203149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:55.748 [2024-05-15 18:21:48.203167] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 
00:29:55.748 [2024-05-15 18:21:48.203191] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:29:55.748 [2024-05-15 18:21:48.203210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:55.748 [2024-05-15 18:21:48.203232] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:55.748 [2024-05-15 18:21:48.203253] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:55.748 [2024-05-15 18:21:48.203286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:55.748 [2024-05-15 18:21:48.203325] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:55.748 [2024-05-15 18:21:48.203358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:29:55.748 [2024-05-15 18:21:48.203379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:55.748 [2024-05-15 18:21:48.203410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:55.748 [2024-05-15 18:21:48.203431] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:55.748 [2024-05-15 18:21:48.203455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:55.748 [2024-05-15 18:21:48.203475] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:55.748 [2024-05-15 18:21:48.203499] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:29:55.748 [2024-05-15 18:21:48.203520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:55.748 [2024-05-15 18:21:48.203546] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:55.748 [2024-05-15 18:21:48.203568] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:55.748 [2024-05-15 18:21:48.203594] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:55.748 [2024-05-15 18:21:48.203628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.748 [2024-05-15 18:21:48.203656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:55.748 [2024-05-15 18:21:48.203678] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:55.748 [2024-05-15 18:21:48.203705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:55.748 [2024-05-15 18:21:48.203728] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:55.748 [2024-05-15 18:21:48.203755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:55.748 [2024-05-15 18:21:48.203778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:55.748 [2024-05-15 18:21:48.203812] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:55.748 [2024-05-15 18:21:48.203839] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:55.748 [2024-05-15 18:21:48.203877] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:55.748 [2024-05-15 18:21:48.203902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:29:55.748 [2024-05-15 18:21:48.203931] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:29:55.748 
[2024-05-15 18:21:48.203956] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:29:55.748 [2024-05-15 18:21:48.203984] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:29:55.748 [2024-05-15 18:21:48.204023] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:29:55.749 [2024-05-15 18:21:48.204053] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:29:55.749 [2024-05-15 18:21:48.204076] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:29:55.749 [2024-05-15 18:21:48.204105] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:29:55.749 [2024-05-15 18:21:48.204129] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:29:55.749 [2024-05-15 18:21:48.204157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:29:55.749 [2024-05-15 18:21:48.204194] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:29:55.749 [2024-05-15 18:21:48.204222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:29:55.749 [2024-05-15 18:21:48.204244] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:55.749 [2024-05-15 18:21:48.204280] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:55.749 [2024-05-15 18:21:48.204329] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:55.749 [2024-05-15 18:21:48.204361] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:55.749 [2024-05-15 18:21:48.204386] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:55.749 [2024-05-15 18:21:48.204416] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:55.749 [2024-05-15 18:21:48.204444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.749 [2024-05-15 18:21:48.204473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:55.749 [2024-05-15 18:21:48.204503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.749 ms 00:29:55.749 [2024-05-15 18:21:48.204531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.749 [2024-05-15 18:21:48.229220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.749 [2024-05-15 18:21:48.229313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:55.749 [2024-05-15 18:21:48.229337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 24.548 ms 00:29:55.749 [2024-05-15 18:21:48.229353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.749 [2024-05-15 18:21:48.229485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.749 [2024-05-15 18:21:48.229506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:55.749 [2024-05-15 18:21:48.229521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:29:55.749 [2024-05-15 18:21:48.229536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.007 [2024-05-15 18:21:48.273006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.007 [2024-05-15 18:21:48.273084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:56.007 [2024-05-15 18:21:48.273107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.381 ms 00:29:56.007 [2024-05-15 18:21:48.273124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.007 [2024-05-15 18:21:48.273202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.008 [2024-05-15 18:21:48.273222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:56.008 [2024-05-15 18:21:48.273237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:56.008 [2024-05-15 18:21:48.273252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.008 [2024-05-15 18:21:48.273891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.008 [2024-05-15 18:21:48.273922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:56.008 [2024-05-15 18:21:48.273938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:29:56.008 [2024-05-15 18:21:48.273955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.008 [2024-05-15 18:21:48.274108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.008 [2024-05-15 18:21:48.274130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:56.008 [2024-05-15 18:21:48.274144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:29:56.008 [2024-05-15 18:21:48.274161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.008 [2024-05-15 18:21:48.295727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.008 [2024-05-15 18:21:48.295790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:56.008 [2024-05-15 18:21:48.295813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.535 ms 00:29:56.008 [2024-05-15 18:21:48.295829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.008 [2024-05-15 18:21:48.311168] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:56.008 [2024-05-15 18:21:48.315334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.008 [2024-05-15 18:21:48.315375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:56.008 [2024-05-15 18:21:48.315402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.352 ms 00:29:56.008 [2024-05-15 18:21:48.315417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.008 [2024-05-15 18:21:48.403166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.008 [2024-05-15 
18:21:48.403274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:56.008 [2024-05-15 18:21:48.403324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.673 ms 00:29:56.008 [2024-05-15 18:21:48.403348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.008 [2024-05-15 18:21:48.403473] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:29:56.008 [2024-05-15 18:21:48.403503] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:29:59.296 [2024-05-15 18:21:51.251422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.296 [2024-05-15 18:21:51.251499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:59.296 [2024-05-15 18:21:51.251529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2847.938 ms 00:29:59.296 [2024-05-15 18:21:51.251543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.296 [2024-05-15 18:21:51.251782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.296 [2024-05-15 18:21:51.251802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:59.296 [2024-05-15 18:21:51.251819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:29:59.296 [2024-05-15 18:21:51.251832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.296 [2024-05-15 18:21:51.282227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.296 [2024-05-15 18:21:51.282282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:59.296 [2024-05-15 18:21:51.282316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.323 ms 00:29:59.296 [2024-05-15 18:21:51.282331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.296 [2024-05-15 18:21:51.312122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.296 [2024-05-15 18:21:51.312174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:59.296 [2024-05-15 18:21:51.312199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.733 ms 00:29:59.296 [2024-05-15 18:21:51.312213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.296 [2024-05-15 18:21:51.312664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.296 [2024-05-15 18:21:51.312696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:59.296 [2024-05-15 18:21:51.312716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:29:59.296 [2024-05-15 18:21:51.312729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.296 [2024-05-15 18:21:51.392434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.296 [2024-05-15 18:21:51.392496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:59.296 [2024-05-15 18:21:51.392525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.633 ms 00:29:59.296 [2024-05-15 18:21:51.392539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.296 [2024-05-15 18:21:51.424868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.296 [2024-05-15 18:21:51.424942] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:59.296 [2024-05-15 18:21:51.424965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.263 ms 00:29:59.296 [2024-05-15 18:21:51.424979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.296 [2024-05-15 18:21:51.427129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.296 [2024-05-15 18:21:51.427170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:29:59.296 [2024-05-15 18:21:51.427190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.093 ms 00:29:59.296 [2024-05-15 18:21:51.427203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.296 [2024-05-15 18:21:51.457709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.296 [2024-05-15 18:21:51.457756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:59.296 [2024-05-15 18:21:51.457778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.429 ms 00:29:59.296 [2024-05-15 18:21:51.457791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.296 [2024-05-15 18:21:51.457853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.296 [2024-05-15 18:21:51.457872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:59.296 [2024-05-15 18:21:51.457888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:59.296 [2024-05-15 18:21:51.457900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.296 [2024-05-15 18:21:51.458024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.296 [2024-05-15 18:21:51.458044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:59.296 [2024-05-15 18:21:51.458059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:29:59.296 [2024-05-15 18:21:51.458072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.296 [2024-05-15 18:21:51.459348] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3283.517 ms, result 0 00:29:59.296 { 00:29:59.296 "name": "ftl0", 00:29:59.296 "uuid": "17e6d8e7-7482-43f5-9327-0822f86d5edf" 00:29:59.296 } 00:29:59.296 18:21:51 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:59.296 18:21:51 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:59.296 18:21:51 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:59.296 18:21:51 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:59.555 [2024-05-15 18:21:52.014817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.555 [2024-05-15 18:21:52.014891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:59.555 [2024-05-15 18:21:52.014916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:59.555 [2024-05-15 18:21:52.014935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.555 [2024-05-15 18:21:52.014976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:59.555 [2024-05-15 18:21:52.018626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.555 [2024-05-15 
18:21:52.018663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:59.555 [2024-05-15 18:21:52.018693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.620 ms 00:29:59.555 [2024-05-15 18:21:52.018706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.555 [2024-05-15 18:21:52.019045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.555 [2024-05-15 18:21:52.019066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:59.555 [2024-05-15 18:21:52.019082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:29:59.555 [2024-05-15 18:21:52.019095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.555 [2024-05-15 18:21:52.022284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.555 [2024-05-15 18:21:52.022324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:59.555 [2024-05-15 18:21:52.022343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.161 ms 00:29:59.555 [2024-05-15 18:21:52.022356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.555 [2024-05-15 18:21:52.028958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.555 [2024-05-15 18:21:52.029005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:29:59.555 [2024-05-15 18:21:52.029041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.558 ms 00:29:59.555 [2024-05-15 18:21:52.029054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.816 [2024-05-15 18:21:52.060737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.816 [2024-05-15 18:21:52.060805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:59.816 [2024-05-15 18:21:52.060829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.587 ms 00:29:59.816 [2024-05-15 18:21:52.060843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.816 [2024-05-15 18:21:52.079341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.816 [2024-05-15 18:21:52.079404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:59.816 [2024-05-15 18:21:52.079428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.429 ms 00:29:59.816 [2024-05-15 18:21:52.079446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.816 [2024-05-15 18:21:52.079662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.816 [2024-05-15 18:21:52.079684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:59.816 [2024-05-15 18:21:52.079701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:29:59.816 [2024-05-15 18:21:52.079714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.816 [2024-05-15 18:21:52.110748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.816 [2024-05-15 18:21:52.110820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:59.816 [2024-05-15 18:21:52.110851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.996 ms 00:29:59.816 [2024-05-15 18:21:52.110865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.816 [2024-05-15 18:21:52.141355] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.816 [2024-05-15 18:21:52.141417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:59.816 [2024-05-15 18:21:52.141441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.417 ms 00:29:59.816 [2024-05-15 18:21:52.141455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.816 [2024-05-15 18:21:52.172123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.816 [2024-05-15 18:21:52.172197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:59.816 [2024-05-15 18:21:52.172222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.581 ms 00:29:59.816 [2024-05-15 18:21:52.172235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.816 [2024-05-15 18:21:52.202710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.816 [2024-05-15 18:21:52.202770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:59.816 [2024-05-15 18:21:52.202793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.294 ms 00:29:59.816 [2024-05-15 18:21:52.202806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.816 [2024-05-15 18:21:52.202866] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:59.816 [2024-05-15 18:21:52.202893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.202915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.202928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.202943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.202956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.202971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.202984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 
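This unload is exercising the --fast-shutdown path: the device was created earlier in this log with bdev_ftl_create -b ftl0 -d ... -c nvc0n1p0 --l2p_dram_limit 10 --fast-shutdown, and bdev_ftl_unload -b ftl0 persists the L2P and metadata and sets the FTL clean state, so the restore that follows can start from clean metadata rather than a full recovery. Each ftl_dev_dump_bands record reads Band N: <valid LBAs> / <band capacity in blocks> wr_cnt: <write count> state: <state>; with no user writes yet, all 100 bands are expected to show 0 / 261120 wr_cnt: 0 state: free, as the dump below continues to confirm. A hypothetical one-liner to summarize band states from a saved per-record copy of such a log (log path assumed):

  awk '/ftl_dev_dump_bands/ && /state:/ { s[$NF]++ } END { for (k in s) print k, s[k] }' ftl_unload.log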
00:29:59.816 [2024-05-15 18:21:52.203121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:59.816 [2024-05-15 18:21:52.203284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 
wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.203985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204287] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:59.817 [2024-05-15 18:21:52.204359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:59.818 [2024-05-15 18:21:52.204372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:59.818 [2024-05-15 18:21:52.204388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:59.818 [2024-05-15 18:21:52.204401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:59.818 [2024-05-15 18:21:52.204421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:59.818 [2024-05-15 18:21:52.204434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:59.818 [2024-05-15 18:21:52.204449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:59.818 [2024-05-15 18:21:52.204471] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:59.818 [2024-05-15 18:21:52.204486] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 17e6d8e7-7482-43f5-9327-0822f86d5edf 00:29:59.818 [2024-05-15 18:21:52.204499] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:59.818 [2024-05-15 18:21:52.204513] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:59.818 [2024-05-15 18:21:52.204524] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:59.818 [2024-05-15 18:21:52.204543] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:59.818 [2024-05-15 18:21:52.204555] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:59.818 [2024-05-15 18:21:52.204570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:59.818 [2024-05-15 18:21:52.204582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:59.818 [2024-05-15 18:21:52.204595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:59.818 [2024-05-15 18:21:52.204606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:59.818 [2024-05-15 18:21:52.204624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.818 [2024-05-15 18:21:52.204645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:59.818 [2024-05-15 18:21:52.204665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.759 ms 00:29:59.818 [2024-05-15 18:21:52.204678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.818 [2024-05-15 18:21:52.221752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.818 [2024-05-15 18:21:52.221807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:59.818 [2024-05-15 18:21:52.221830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.998 ms 
00:29:59.818 [2024-05-15 18:21:52.221844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.818 [2024-05-15 18:21:52.222113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.818 [2024-05-15 18:21:52.222138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:59.818 [2024-05-15 18:21:52.222156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:29:59.818 [2024-05-15 18:21:52.222169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.818 [2024-05-15 18:21:52.281804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.818 [2024-05-15 18:21:52.281874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:59.818 [2024-05-15 18:21:52.281897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.818 [2024-05-15 18:21:52.281911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.818 [2024-05-15 18:21:52.282007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.818 [2024-05-15 18:21:52.282024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:59.818 [2024-05-15 18:21:52.282043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.818 [2024-05-15 18:21:52.282056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.818 [2024-05-15 18:21:52.282190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.818 [2024-05-15 18:21:52.282210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:59.818 [2024-05-15 18:21:52.282231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.818 [2024-05-15 18:21:52.282243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.818 [2024-05-15 18:21:52.282275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.818 [2024-05-15 18:21:52.282290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:59.818 [2024-05-15 18:21:52.282323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.818 [2024-05-15 18:21:52.282335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.084 [2024-05-15 18:21:52.391003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.084 [2024-05-15 18:21:52.391065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:00.084 [2024-05-15 18:21:52.391088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.084 [2024-05-15 18:21:52.391102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.084 [2024-05-15 18:21:52.432750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.084 [2024-05-15 18:21:52.432841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:00.084 [2024-05-15 18:21:52.432869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.084 [2024-05-15 18:21:52.432884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.084 [2024-05-15 18:21:52.432999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.084 [2024-05-15 18:21:52.433019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:00.084 [2024-05-15 
18:21:52.433035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.084 [2024-05-15 18:21:52.433051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.084 [2024-05-15 18:21:52.433123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.084 [2024-05-15 18:21:52.433142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:00.084 [2024-05-15 18:21:52.433157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.084 [2024-05-15 18:21:52.433170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.084 [2024-05-15 18:21:52.433338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.084 [2024-05-15 18:21:52.433359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:00.084 [2024-05-15 18:21:52.433380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.084 [2024-05-15 18:21:52.433396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.084 [2024-05-15 18:21:52.433465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.084 [2024-05-15 18:21:52.433483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:00.084 [2024-05-15 18:21:52.433502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.084 [2024-05-15 18:21:52.433517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.084 [2024-05-15 18:21:52.433577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.084 [2024-05-15 18:21:52.433593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:00.084 [2024-05-15 18:21:52.433608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.084 [2024-05-15 18:21:52.433621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.084 [2024-05-15 18:21:52.433685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:00.084 [2024-05-15 18:21:52.433702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:00.084 [2024-05-15 18:21:52.433717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:00.084 [2024-05-15 18:21:52.433730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.084 [2024-05-15 18:21:52.433908] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 419.038 ms, result 0 00:30:00.084 true 00:30:00.084 18:21:52 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 84847 00:30:00.084 18:21:52 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 84847 ']' 00:30:00.084 18:21:52 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 84847 00:30:00.084 18:21:52 ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # uname 00:30:00.084 18:21:52 ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:00.084 18:21:52 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84847 00:30:00.084 18:21:52 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:00.084 killing process with pid 84847 00:30:00.084 18:21:52 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:00.084 18:21:52 ftl.ftl_restore_fast -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 84847' 00:30:00.084 18:21:52 ftl.ftl_restore_fast -- common/autotest_common.sh@965 -- # kill 84847 00:30:00.084 18:21:52 ftl.ftl_restore_fast -- common/autotest_common.sh@970 -- # wait 84847 00:30:03.368 18:21:55 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:08.635 262144+0 records in 00:30:08.635 262144+0 records out 00:30:08.635 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.76646 s, 225 MB/s 00:30:08.635 18:22:00 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:10.021 18:22:02 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:10.021 [2024-05-15 18:22:02.356064] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:30:10.021 [2024-05-15 18:22:02.356210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85089 ] 00:30:10.021 [2024-05-15 18:22:02.521655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.588 [2024-05-15 18:22:02.816413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.847 [2024-05-15 18:22:03.160736] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:10.847 [2024-05-15 18:22:03.160811] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:10.847 [2024-05-15 18:22:03.317637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.847 [2024-05-15 18:22:03.317709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:10.847 [2024-05-15 18:22:03.317729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:10.847 [2024-05-15 18:22:03.317748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.847 [2024-05-15 18:22:03.317824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.847 [2024-05-15 18:22:03.317845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:10.847 [2024-05-15 18:22:03.317858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:30:10.847 [2024-05-15 18:22:03.317870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.847 [2024-05-15 18:22:03.317901] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:10.847 [2024-05-15 18:22:03.318831] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:10.847 [2024-05-15 18:22:03.318872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.847 [2024-05-15 18:22:03.318903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:10.847 [2024-05-15 18:22:03.318915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:30:10.847 [2024-05-15 18:22:03.318927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.847 [2024-05-15 18:22:03.320900] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, 
shm_clean 0 00:30:10.847 [2024-05-15 18:22:03.337781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.847 [2024-05-15 18:22:03.337866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:10.847 [2024-05-15 18:22:03.337888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.880 ms 00:30:10.847 [2024-05-15 18:22:03.337901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.847 [2024-05-15 18:22:03.337994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.847 [2024-05-15 18:22:03.338015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:10.847 [2024-05-15 18:22:03.338028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:10.847 [2024-05-15 18:22:03.338039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.847 [2024-05-15 18:22:03.347162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.847 [2024-05-15 18:22:03.347237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:10.847 [2024-05-15 18:22:03.347257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.011 ms 00:30:10.847 [2024-05-15 18:22:03.347270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.847 [2024-05-15 18:22:03.347401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.847 [2024-05-15 18:22:03.347424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:10.847 [2024-05-15 18:22:03.347448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:30:10.847 [2024-05-15 18:22:03.347460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.847 [2024-05-15 18:22:03.347537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.847 [2024-05-15 18:22:03.347554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:10.847 [2024-05-15 18:22:03.347567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:10.847 [2024-05-15 18:22:03.347578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.847 [2024-05-15 18:22:03.347613] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:11.107 [2024-05-15 18:22:03.352647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.107 [2024-05-15 18:22:03.352686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:11.107 [2024-05-15 18:22:03.352703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.041 ms 00:30:11.107 [2024-05-15 18:22:03.352714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.107 [2024-05-15 18:22:03.352756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.107 [2024-05-15 18:22:03.352771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:11.107 [2024-05-15 18:22:03.352783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:11.107 [2024-05-15 18:22:03.352795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.107 [2024-05-15 18:22:03.352886] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:11.107 [2024-05-15 18:22:03.352919] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob load 0x138 bytes 00:30:11.107 [2024-05-15 18:22:03.352960] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:11.107 [2024-05-15 18:22:03.352980] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:30:11.107 [2024-05-15 18:22:03.353061] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:30:11.107 [2024-05-15 18:22:03.353077] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:11.107 [2024-05-15 18:22:03.353092] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:30:11.107 [2024-05-15 18:22:03.353112] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:11.107 [2024-05-15 18:22:03.353126] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:11.107 [2024-05-15 18:22:03.353139] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:11.107 [2024-05-15 18:22:03.353150] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:11.107 [2024-05-15 18:22:03.353161] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:30:11.107 [2024-05-15 18:22:03.353171] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:30:11.107 [2024-05-15 18:22:03.353183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.107 [2024-05-15 18:22:03.353194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:11.107 [2024-05-15 18:22:03.353207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:30:11.107 [2024-05-15 18:22:03.353237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.107 [2024-05-15 18:22:03.353330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.107 [2024-05-15 18:22:03.353351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:11.107 [2024-05-15 18:22:03.353363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:30:11.107 [2024-05-15 18:22:03.353374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.107 [2024-05-15 18:22:03.353460] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:11.107 [2024-05-15 18:22:03.353476] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:11.107 [2024-05-15 18:22:03.353494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:11.107 [2024-05-15 18:22:03.353506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.107 [2024-05-15 18:22:03.353517] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:11.107 [2024-05-15 18:22:03.353527] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:11.107 [2024-05-15 18:22:03.353538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:11.107 [2024-05-15 18:22:03.353550] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:11.107 [2024-05-15 18:22:03.353561] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:11.107 [2024-05-15 18:22:03.353571] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:11.107 [2024-05-15 18:22:03.353581] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:11.107 [2024-05-15 18:22:03.353592] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:11.107 [2024-05-15 18:22:03.353602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:11.107 [2024-05-15 18:22:03.353613] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:11.107 [2024-05-15 18:22:03.353623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:30:11.107 [2024-05-15 18:22:03.353648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.107 [2024-05-15 18:22:03.353659] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:11.107 [2024-05-15 18:22:03.353672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:30:11.107 [2024-05-15 18:22:03.353683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.107 [2024-05-15 18:22:03.353694] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:30:11.107 [2024-05-15 18:22:03.353704] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:30:11.107 [2024-05-15 18:22:03.353715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:30:11.107 [2024-05-15 18:22:03.353726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:11.107 [2024-05-15 18:22:03.353737] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:11.107 [2024-05-15 18:22:03.353748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:11.107 [2024-05-15 18:22:03.353758] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:11.107 [2024-05-15 18:22:03.353769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:30:11.107 [2024-05-15 18:22:03.353780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:11.107 [2024-05-15 18:22:03.353790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:11.107 [2024-05-15 18:22:03.353801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:11.107 [2024-05-15 18:22:03.353811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:11.107 [2024-05-15 18:22:03.353821] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:11.107 [2024-05-15 18:22:03.353835] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:30:11.107 [2024-05-15 18:22:03.353845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:11.107 [2024-05-15 18:22:03.353856] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:11.107 [2024-05-15 18:22:03.353867] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:11.107 [2024-05-15 18:22:03.353877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:11.107 [2024-05-15 18:22:03.353887] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:11.107 [2024-05-15 18:22:03.353898] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:30:11.107 [2024-05-15 18:22:03.353909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:11.107 [2024-05-15 18:22:03.353919] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:11.107 [2024-05-15 
18:22:03.353931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:11.107 [2024-05-15 18:22:03.353946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:11.107 [2024-05-15 18:22:03.353957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.107 [2024-05-15 18:22:03.353969] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:11.107 [2024-05-15 18:22:03.353981] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:11.107 [2024-05-15 18:22:03.353991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:11.107 [2024-05-15 18:22:03.354002] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:11.107 [2024-05-15 18:22:03.354013] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:11.107 [2024-05-15 18:22:03.354025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:11.107 [2024-05-15 18:22:03.354037] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:11.107 [2024-05-15 18:22:03.354052] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.107 [2024-05-15 18:22:03.354065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:11.107 [2024-05-15 18:22:03.354077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:30:11.107 [2024-05-15 18:22:03.354088] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:30:11.107 [2024-05-15 18:22:03.354101] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:30:11.107 [2024-05-15 18:22:03.354113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:30:11.108 [2024-05-15 18:22:03.354124] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:30:11.108 [2024-05-15 18:22:03.354135] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:30:11.108 [2024-05-15 18:22:03.354147] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:30:11.108 [2024-05-15 18:22:03.354158] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:30:11.108 [2024-05-15 18:22:03.354170] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:30:11.108 [2024-05-15 18:22:03.354181] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:30:11.108 [2024-05-15 18:22:03.354193] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:30:11.108 [2024-05-15 18:22:03.354205] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x1061e0 blk_sz:0x3d120 00:30:11.108 [2024-05-15 18:22:03.354217] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:11.108 [2024-05-15 18:22:03.354230] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.108 [2024-05-15 18:22:03.354242] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:11.108 [2024-05-15 18:22:03.354254] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:11.108 [2024-05-15 18:22:03.354266] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:11.108 [2024-05-15 18:22:03.354277] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:11.108 [2024-05-15 18:22:03.354290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.108 [2024-05-15 18:22:03.354317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:11.108 [2024-05-15 18:22:03.354329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:30:11.108 [2024-05-15 18:22:03.354340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.108 [2024-05-15 18:22:03.376711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.108 [2024-05-15 18:22:03.376772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:11.108 [2024-05-15 18:22:03.376792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.307 ms 00:30:11.108 [2024-05-15 18:22:03.376804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.108 [2024-05-15 18:22:03.376930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.108 [2024-05-15 18:22:03.376954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:11.108 [2024-05-15 18:22:03.376967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:30:11.108 [2024-05-15 18:22:03.376985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.108 [2024-05-15 18:22:03.433098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.108 [2024-05-15 18:22:03.433167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:11.108 [2024-05-15 18:22:03.433193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.027 ms 00:30:11.108 [2024-05-15 18:22:03.433206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.108 [2024-05-15 18:22:03.433283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.108 [2024-05-15 18:22:03.433316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:11.108 [2024-05-15 18:22:03.433331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:11.108 [2024-05-15 18:22:03.433342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.108 [2024-05-15 18:22:03.433973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.108 [2024-05-15 18:22:03.433993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
00:30:11.108 [2024-05-15 18:22:03.434006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:30:11.108 [2024-05-15 18:22:03.434023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.108 [2024-05-15 18:22:03.434175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.108 [2024-05-15 18:22:03.434194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:11.108 [2024-05-15 18:22:03.434207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:30:11.108 [2024-05-15 18:22:03.434219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.108 [2024-05-15 18:22:03.454421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.108 [2024-05-15 18:22:03.454498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:11.108 [2024-05-15 18:22:03.454519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.167 ms 00:30:11.108 [2024-05-15 18:22:03.454531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.108 [2024-05-15 18:22:03.471434] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:11.108 [2024-05-15 18:22:03.471496] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:11.108 [2024-05-15 18:22:03.471524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.108 [2024-05-15 18:22:03.471540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:11.108 [2024-05-15 18:22:03.471556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.794 ms 00:30:11.108 [2024-05-15 18:22:03.471567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.108 [2024-05-15 18:22:03.501187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.108 [2024-05-15 18:22:03.501266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:11.108 [2024-05-15 18:22:03.501287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.556 ms 00:30:11.108 [2024-05-15 18:22:03.501310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.108 [2024-05-15 18:22:03.518050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.108 [2024-05-15 18:22:03.518111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:11.108 [2024-05-15 18:22:03.518131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.641 ms 00:30:11.108 [2024-05-15 18:22:03.518143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.108 [2024-05-15 18:22:03.533337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.108 [2024-05-15 18:22:03.533398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:11.108 [2024-05-15 18:22:03.533416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.136 ms 00:30:11.108 [2024-05-15 18:22:03.533428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.108 [2024-05-15 18:22:03.533923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.108 [2024-05-15 18:22:03.533945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:11.108 [2024-05-15 18:22:03.533959] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:30:11.108 [2024-05-15 18:22:03.533970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.366 [2024-05-15 18:22:03.613546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.366 [2024-05-15 18:22:03.613617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:11.366 [2024-05-15 18:22:03.613638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.550 ms 00:30:11.366 [2024-05-15 18:22:03.613651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.366 [2024-05-15 18:22:03.628139] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:11.366 [2024-05-15 18:22:03.632383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.366 [2024-05-15 18:22:03.632432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:11.366 [2024-05-15 18:22:03.632460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.651 ms 00:30:11.366 [2024-05-15 18:22:03.632473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.366 [2024-05-15 18:22:03.632602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.366 [2024-05-15 18:22:03.632621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:11.366 [2024-05-15 18:22:03.632637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:11.366 [2024-05-15 18:22:03.632648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.366 [2024-05-15 18:22:03.632742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.366 [2024-05-15 18:22:03.632760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:11.366 [2024-05-15 18:22:03.632773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:11.366 [2024-05-15 18:22:03.632784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.366 [2024-05-15 18:22:03.634881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.366 [2024-05-15 18:22:03.634924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:30:11.366 [2024-05-15 18:22:03.634939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.068 ms 00:30:11.366 [2024-05-15 18:22:03.634951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.366 [2024-05-15 18:22:03.634989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.366 [2024-05-15 18:22:03.635004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:11.366 [2024-05-15 18:22:03.635016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:11.366 [2024-05-15 18:22:03.635028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.366 [2024-05-15 18:22:03.635070] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:11.366 [2024-05-15 18:22:03.635087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.366 [2024-05-15 18:22:03.635098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:11.366 [2024-05-15 18:22:03.635114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:11.366 [2024-05-15 18:22:03.635125] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:30:11.366 [2024-05-15 18:22:03.668213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.366 [2024-05-15 18:22:03.668318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:11.366 [2024-05-15 18:22:03.668340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.059 ms 00:30:11.366 [2024-05-15 18:22:03.668355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.367 [2024-05-15 18:22:03.668482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.367 [2024-05-15 18:22:03.668513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:11.367 [2024-05-15 18:22:03.668528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:11.367 [2024-05-15 18:22:03.668539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.367 [2024-05-15 18:22:03.669950] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.787 ms, result 0 00:30:51.139  Copying: 27/1024 [MB] (27 MBps) Copying: 55/1024 [MB] (27 MBps) Copying: 84/1024 [MB] (29 MBps) Copying: 113/1024 [MB] (29 MBps) Copying: 141/1024 [MB] (27 MBps) Copying: 169/1024 [MB] (28 MBps) Copying: 197/1024 [MB] (27 MBps) Copying: 222/1024 [MB] (25 MBps) Copying: 249/1024 [MB] (26 MBps) Copying: 274/1024 [MB] (25 MBps) Copying: 302/1024 [MB] (27 MBps) Copying: 328/1024 [MB] (26 MBps) Copying: 354/1024 [MB] (26 MBps) Copying: 380/1024 [MB] (26 MBps) Copying: 406/1024 [MB] (26 MBps) Copying: 432/1024 [MB] (25 MBps) Copying: 457/1024 [MB] (25 MBps) Copying: 481/1024 [MB] (24 MBps) Copying: 507/1024 [MB] (25 MBps) Copying: 535/1024 [MB] (28 MBps) Copying: 560/1024 [MB] (25 MBps) Copying: 585/1024 [MB] (25 MBps) Copying: 612/1024 [MB] (26 MBps) Copying: 637/1024 [MB] (25 MBps) Copying: 664/1024 [MB] (26 MBps) Copying: 690/1024 [MB] (26 MBps) Copying: 715/1024 [MB] (25 MBps) Copying: 740/1024 [MB] (24 MBps) Copying: 764/1024 [MB] (24 MBps) Copying: 790/1024 [MB] (25 MBps) Copying: 813/1024 [MB] (23 MBps) Copying: 837/1024 [MB] (24 MBps) Copying: 860/1024 [MB] (22 MBps) Copying: 883/1024 [MB] (23 MBps) Copying: 906/1024 [MB] (22 MBps) Copying: 929/1024 [MB] (22 MBps) Copying: 951/1024 [MB] (22 MBps) Copying: 976/1024 [MB] (25 MBps) Copying: 1001/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-05-15 18:22:43.578168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.139 [2024-05-15 18:22:43.578232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:51.139 [2024-05-15 18:22:43.578267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:51.139 [2024-05-15 18:22:43.578281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.139 [2024-05-15 18:22:43.578375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:51.139 [2024-05-15 18:22:43.582067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.139 [2024-05-15 18:22:43.582117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:51.139 [2024-05-15 18:22:43.582147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.657 ms 00:30:51.139 [2024-05-15 18:22:43.582158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.139 [2024-05-15 18:22:43.584235] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.139 [2024-05-15 18:22:43.584332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:51.139 [2024-05-15 18:22:43.584363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.026 ms 00:30:51.139 [2024-05-15 18:22:43.584381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.139 [2024-05-15 18:22:43.584412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.139 [2024-05-15 18:22:43.584427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:51.139 [2024-05-15 18:22:43.584440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:51.139 [2024-05-15 18:22:43.584450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.139 [2024-05-15 18:22:43.584506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.139 [2024-05-15 18:22:43.584521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:51.139 [2024-05-15 18:22:43.584568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:51.139 [2024-05-15 18:22:43.584580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.139 [2024-05-15 18:22:43.584598] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:51.140 [2024-05-15 18:22:43.584615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: 
free 00:30:51.140 [2024-05-15 18:22:43.584793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.584980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 
261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:51.140 [2024-05-15 18:22:43.585414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585693] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:51.141 [2024-05-15 18:22:43.585839] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:51.141 [2024-05-15 18:22:43.585850] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 17e6d8e7-7482-43f5-9327-0822f86d5edf 00:30:51.141 [2024-05-15 18:22:43.585862] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:51.141 [2024-05-15 18:22:43.585873] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:51.141 [2024-05-15 18:22:43.585884] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:51.141 [2024-05-15 18:22:43.585895] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:51.141 [2024-05-15 18:22:43.585905] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:51.141 [2024-05-15 18:22:43.585926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:51.141 [2024-05-15 18:22:43.585937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:51.141 [2024-05-15 18:22:43.585948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:51.141 [2024-05-15 18:22:43.585957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:51.141 [2024-05-15 18:22:43.585968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.141 [2024-05-15 18:22:43.585979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:51.141 [2024-05-15 18:22:43.585991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.371 ms 00:30:51.141 [2024-05-15 18:22:43.586002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.141 [2024-05-15 18:22:43.602598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.141 [2024-05-15 18:22:43.602655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:51.141 [2024-05-15 18:22:43.602687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 16.574 ms 00:30:51.141 [2024-05-15 18:22:43.602702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.141 [2024-05-15 18:22:43.602989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.141 [2024-05-15 18:22:43.603022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:51.141 [2024-05-15 18:22:43.603036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:30:51.141 [2024-05-15 18:22:43.603047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.465 [2024-05-15 18:22:43.648080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.465 [2024-05-15 18:22:43.648138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:51.465 [2024-05-15 18:22:43.648163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.465 [2024-05-15 18:22:43.648175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.465 [2024-05-15 18:22:43.648250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.465 [2024-05-15 18:22:43.648265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:51.465 [2024-05-15 18:22:43.648276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.465 [2024-05-15 18:22:43.648288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.465 [2024-05-15 18:22:43.648417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.465 [2024-05-15 18:22:43.648452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:51.465 [2024-05-15 18:22:43.648465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.465 [2024-05-15 18:22:43.648483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.465 [2024-05-15 18:22:43.648505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.465 [2024-05-15 18:22:43.648519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:51.465 [2024-05-15 18:22:43.648530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.465 [2024-05-15 18:22:43.648541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.465 [2024-05-15 18:22:43.748565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.465 [2024-05-15 18:22:43.748636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:51.465 [2024-05-15 18:22:43.748655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.465 [2024-05-15 18:22:43.748676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.465 [2024-05-15 18:22:43.787229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.465 [2024-05-15 18:22:43.787324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:51.466 [2024-05-15 18:22:43.787358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.466 [2024-05-15 18:22:43.787370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.466 [2024-05-15 18:22:43.787441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.466 [2024-05-15 18:22:43.787456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:51.466 
[2024-05-15 18:22:43.787468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.466 [2024-05-15 18:22:43.787487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.466 [2024-05-15 18:22:43.787536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.466 [2024-05-15 18:22:43.787564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:51.466 [2024-05-15 18:22:43.787592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.466 [2024-05-15 18:22:43.787603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.466 [2024-05-15 18:22:43.787700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.466 [2024-05-15 18:22:43.787718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:51.466 [2024-05-15 18:22:43.787730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.466 [2024-05-15 18:22:43.787741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.466 [2024-05-15 18:22:43.787777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.466 [2024-05-15 18:22:43.787810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:51.466 [2024-05-15 18:22:43.787822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.466 [2024-05-15 18:22:43.787833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.466 [2024-05-15 18:22:43.787877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.466 [2024-05-15 18:22:43.787909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:51.466 [2024-05-15 18:22:43.787922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.466 [2024-05-15 18:22:43.787933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.466 [2024-05-15 18:22:43.787989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:51.466 [2024-05-15 18:22:43.788017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:51.466 [2024-05-15 18:22:43.788031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:51.466 [2024-05-15 18:22:43.788043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.466 [2024-05-15 18:22:43.788188] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 210.009 ms, result 0 00:30:52.842 00:30:52.842 00:30:52.842 18:22:45 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:52.842 [2024-05-15 18:22:45.205585] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:30:52.842 [2024-05-15 18:22:45.205756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85499 ] 00:30:53.100 [2024-05-15 18:22:45.384062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.359 [2024-05-15 18:22:45.673282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.617 [2024-05-15 18:22:46.047363] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:53.617 [2024-05-15 18:22:46.047489] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:53.877 [2024-05-15 18:22:46.206065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.877 [2024-05-15 18:22:46.206142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:53.877 [2024-05-15 18:22:46.206162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:53.877 [2024-05-15 18:22:46.206194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.877 [2024-05-15 18:22:46.206273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.877 [2024-05-15 18:22:46.206293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:53.877 [2024-05-15 18:22:46.206306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:53.877 [2024-05-15 18:22:46.206316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.877 [2024-05-15 18:22:46.206361] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:53.877 [2024-05-15 18:22:46.207288] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:53.877 [2024-05-15 18:22:46.207374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.877 [2024-05-15 18:22:46.207403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:53.877 [2024-05-15 18:22:46.207415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:30:53.877 [2024-05-15 18:22:46.207426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.877 [2024-05-15 18:22:46.207916] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:53.877 [2024-05-15 18:22:46.207951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.877 [2024-05-15 18:22:46.207965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:53.877 [2024-05-15 18:22:46.207983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:53.877 [2024-05-15 18:22:46.207995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.877 [2024-05-15 18:22:46.208090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.877 [2024-05-15 18:22:46.208108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:53.877 [2024-05-15 18:22:46.208125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:53.877 [2024-05-15 18:22:46.208137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.877 [2024-05-15 18:22:46.208574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.877 [2024-05-15 
18:22:46.208605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:53.877 [2024-05-15 18:22:46.208620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:30:53.877 [2024-05-15 18:22:46.208632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.877 [2024-05-15 18:22:46.208716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.877 [2024-05-15 18:22:46.208735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:53.877 [2024-05-15 18:22:46.208751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:53.877 [2024-05-15 18:22:46.208762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.877 [2024-05-15 18:22:46.208793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.877 [2024-05-15 18:22:46.208808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:53.877 [2024-05-15 18:22:46.208821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:53.877 [2024-05-15 18:22:46.208832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.877 [2024-05-15 18:22:46.208860] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:53.877 [2024-05-15 18:22:46.214226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.877 [2024-05-15 18:22:46.214279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:53.877 [2024-05-15 18:22:46.214309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.372 ms 00:30:53.877 [2024-05-15 18:22:46.214322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.877 [2024-05-15 18:22:46.214358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.877 [2024-05-15 18:22:46.214380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:53.877 [2024-05-15 18:22:46.214402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:53.877 [2024-05-15 18:22:46.214413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.877 [2024-05-15 18:22:46.214467] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:53.877 [2024-05-15 18:22:46.214498] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:30:53.877 [2024-05-15 18:22:46.214552] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:53.877 [2024-05-15 18:22:46.214572] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:30:53.877 [2024-05-15 18:22:46.214660] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:30:53.877 [2024-05-15 18:22:46.214675] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:53.877 [2024-05-15 18:22:46.214689] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:30:53.877 [2024-05-15 18:22:46.214704] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:53.877 [2024-05-15 18:22:46.214718] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:53.877 [2024-05-15 18:22:46.214730] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:53.877 [2024-05-15 18:22:46.214742] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:53.877 [2024-05-15 18:22:46.214753] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:30:53.877 [2024-05-15 18:22:46.214764] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:30:53.877 [2024-05-15 18:22:46.214776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.877 [2024-05-15 18:22:46.214787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:53.877 [2024-05-15 18:22:46.214804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:30:53.877 [2024-05-15 18:22:46.214815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.877 [2024-05-15 18:22:46.214890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.877 [2024-05-15 18:22:46.214906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:53.877 [2024-05-15 18:22:46.214918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:53.877 [2024-05-15 18:22:46.214929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.877 [2024-05-15 18:22:46.215023] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:53.877 [2024-05-15 18:22:46.215050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:53.877 [2024-05-15 18:22:46.215064] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:53.877 [2024-05-15 18:22:46.215080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:53.877 [2024-05-15 18:22:46.215092] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:53.877 [2024-05-15 18:22:46.215103] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:53.877 [2024-05-15 18:22:46.215114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:53.877 [2024-05-15 18:22:46.215124] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:53.877 [2024-05-15 18:22:46.215134] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:53.877 [2024-05-15 18:22:46.215144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:53.877 [2024-05-15 18:22:46.215155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:53.877 [2024-05-15 18:22:46.215165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:53.877 [2024-05-15 18:22:46.215175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:53.877 [2024-05-15 18:22:46.215185] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:53.877 [2024-05-15 18:22:46.215195] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:30:53.877 [2024-05-15 18:22:46.215206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:53.877 [2024-05-15 18:22:46.215219] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:53.877 [2024-05-15 18:22:46.215241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:30:53.877 [2024-05-15 18:22:46.215252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:30:53.877 [2024-05-15 18:22:46.215263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:30:53.877 [2024-05-15 18:22:46.215273] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:30:53.877 [2024-05-15 18:22:46.215284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:30:53.877 [2024-05-15 18:22:46.215311] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:53.877 [2024-05-15 18:22:46.215324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:53.877 [2024-05-15 18:22:46.215334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:53.877 [2024-05-15 18:22:46.215345] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:53.877 [2024-05-15 18:22:46.215355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:30:53.877 [2024-05-15 18:22:46.215365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:53.877 [2024-05-15 18:22:46.215376] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:53.877 [2024-05-15 18:22:46.215386] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:53.877 [2024-05-15 18:22:46.215396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:53.877 [2024-05-15 18:22:46.215407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:53.877 [2024-05-15 18:22:46.215417] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:30:53.877 [2024-05-15 18:22:46.215427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:53.877 [2024-05-15 18:22:46.215437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:53.877 [2024-05-15 18:22:46.215447] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:53.877 [2024-05-15 18:22:46.215458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:53.877 [2024-05-15 18:22:46.215468] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:53.877 [2024-05-15 18:22:46.215478] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:30:53.877 [2024-05-15 18:22:46.215489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:53.877 [2024-05-15 18:22:46.215499] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:53.877 [2024-05-15 18:22:46.215510] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:53.877 [2024-05-15 18:22:46.215521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:53.877 [2024-05-15 18:22:46.215532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:53.877 [2024-05-15 18:22:46.215543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:53.877 [2024-05-15 18:22:46.215554] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:53.877 [2024-05-15 18:22:46.215564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:53.877 [2024-05-15 18:22:46.215575] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:53.877 [2024-05-15 18:22:46.215587] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:53.877 [2024-05-15 18:22:46.215597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:53.877 [2024-05-15 18:22:46.215609] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:53.878 [2024-05-15 18:22:46.215629] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:53.878 [2024-05-15 18:22:46.215658] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:53.878 [2024-05-15 18:22:46.215670] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:30:53.878 [2024-05-15 18:22:46.215682] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:30:53.878 [2024-05-15 18:22:46.215694] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:30:53.878 [2024-05-15 18:22:46.215705] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:30:53.878 [2024-05-15 18:22:46.215716] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:30:53.878 [2024-05-15 18:22:46.215728] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:30:53.878 [2024-05-15 18:22:46.215739] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:30:53.878 [2024-05-15 18:22:46.215750] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:30:53.878 [2024-05-15 18:22:46.215762] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:30:53.878 [2024-05-15 18:22:46.215774] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:30:53.878 [2024-05-15 18:22:46.215786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:30:53.878 [2024-05-15 18:22:46.215797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:30:53.878 [2024-05-15 18:22:46.215809] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:53.878 [2024-05-15 18:22:46.215822] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:53.878 [2024-05-15 18:22:46.215835] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:53.878 [2024-05-15 18:22:46.215846] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:53.878 [2024-05-15 18:22:46.215858] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:53.878 [2024-05-15 18:22:46.215870] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:30:53.878 [2024-05-15 18:22:46.215882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.215895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:53.878 [2024-05-15 18:22:46.215911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:30:53.878 [2024-05-15 18:22:46.215922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.234746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.234801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:53.878 [2024-05-15 18:22:46.234822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.766 ms 00:30:53.878 [2024-05-15 18:22:46.234842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.234928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.234943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:53.878 [2024-05-15 18:22:46.234955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:53.878 [2024-05-15 18:22:46.234965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.289678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.289806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:53.878 [2024-05-15 18:22:46.289827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.623 ms 00:30:53.878 [2024-05-15 18:22:46.289840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.289933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.289951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:53.878 [2024-05-15 18:22:46.289972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:53.878 [2024-05-15 18:22:46.289983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.290148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.290178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:53.878 [2024-05-15 18:22:46.290194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:30:53.878 [2024-05-15 18:22:46.290205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.290358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.290383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:53.878 [2024-05-15 18:22:46.290397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:30:53.878 [2024-05-15 18:22:46.290409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.311345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.311441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:53.878 [2024-05-15 18:22:46.311459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.906 ms 00:30:53.878 [2024-05-15 
18:22:46.311471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.311695] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:53.878 [2024-05-15 18:22:46.311717] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:53.878 [2024-05-15 18:22:46.311730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.311742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:53.878 [2024-05-15 18:22:46.311755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:53.878 [2024-05-15 18:22:46.311766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.325833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.325886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:53.878 [2024-05-15 18:22:46.325900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.029 ms 00:30:53.878 [2024-05-15 18:22:46.325917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.326084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.326107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:53.878 [2024-05-15 18:22:46.326121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:30:53.878 [2024-05-15 18:22:46.326147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.326224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.326243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:53.878 [2024-05-15 18:22:46.326256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:53.878 [2024-05-15 18:22:46.326267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.326653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.326684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:53.878 [2024-05-15 18:22:46.326711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:30:53.878 [2024-05-15 18:22:46.326723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.326746] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:53.878 [2024-05-15 18:22:46.326762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.326774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:53.878 [2024-05-15 18:22:46.326785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:53.878 [2024-05-15 18:22:46.326796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.340707] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:53.878 [2024-05-15 18:22:46.340912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.340937] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:53.878 [2024-05-15 18:22:46.340958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.062 ms 00:30:53.878 [2024-05-15 18:22:46.340984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.343246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.343301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:53.878 [2024-05-15 18:22:46.343316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.200 ms 00:30:53.878 [2024-05-15 18:22:46.343327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.343473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.343497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:53.878 [2024-05-15 18:22:46.343510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:53.878 [2024-05-15 18:22:46.343520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.344999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.345051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:30:53.878 [2024-05-15 18:22:46.345066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:30:53.878 [2024-05-15 18:22:46.345078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.345112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.345127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:53.878 [2024-05-15 18:22:46.345146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:53.878 [2024-05-15 18:22:46.345176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.878 [2024-05-15 18:22:46.345236] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:53.878 [2024-05-15 18:22:46.345255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.878 [2024-05-15 18:22:46.345267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:53.878 [2024-05-15 18:22:46.345279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:53.878 [2024-05-15 18:22:46.345290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.136 [2024-05-15 18:22:46.380017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.136 [2024-05-15 18:22:46.380101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:54.136 [2024-05-15 18:22:46.380129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.675 ms 00:30:54.136 [2024-05-15 18:22:46.380142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.137 [2024-05-15 18:22:46.380243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.137 [2024-05-15 18:22:46.380263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:54.137 [2024-05-15 18:22:46.380276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:54.137 [2024-05-15 18:22:46.380288] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.137 [2024-05-15 18:22:46.381656] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 175.058 ms, result 0 00:31:34.515  Copying: 24/1024 [MB] (24 MBps) Copying: 48/1024 [MB] (23 MBps) Copying: 73/1024 [MB] (25 MBps) Copying: 98/1024 [MB] (24 MBps) Copying: 125/1024 [MB] (26 MBps) Copying: 149/1024 [MB] (24 MBps) Copying: 173/1024 [MB] (24 MBps) Copying: 197/1024 [MB] (23 MBps) Copying: 225/1024 [MB] (27 MBps) Copying: 253/1024 [MB] (28 MBps) Copying: 281/1024 [MB] (27 MBps) Copying: 307/1024 [MB] (26 MBps) Copying: 333/1024 [MB] (26 MBps) Copying: 358/1024 [MB] (25 MBps) Copying: 387/1024 [MB] (28 MBps) Copying: 413/1024 [MB] (25 MBps) Copying: 438/1024 [MB] (25 MBps) Copying: 462/1024 [MB] (24 MBps) Copying: 485/1024 [MB] (22 MBps) Copying: 511/1024 [MB] (25 MBps) Copying: 538/1024 [MB] (26 MBps) Copying: 565/1024 [MB] (26 MBps) Copying: 591/1024 [MB] (26 MBps) Copying: 617/1024 [MB] (25 MBps) Copying: 641/1024 [MB] (24 MBps) Copying: 665/1024 [MB] (23 MBps) Copying: 690/1024 [MB] (24 MBps) Copying: 714/1024 [MB] (24 MBps) Copying: 739/1024 [MB] (24 MBps) Copying: 765/1024 [MB] (25 MBps) Copying: 791/1024 [MB] (26 MBps) Copying: 818/1024 [MB] (26 MBps) Copying: 838/1024 [MB] (19 MBps) Copying: 866/1024 [MB] (28 MBps) Copying: 894/1024 [MB] (27 MBps) Copying: 918/1024 [MB] (24 MBps) Copying: 942/1024 [MB] (23 MBps) Copying: 966/1024 [MB] (24 MBps) Copying: 990/1024 [MB] (24 MBps) Copying: 1016/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-05-15 18:23:26.966877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.515 [2024-05-15 18:23:26.967281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:34.515 [2024-05-15 18:23:26.967503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:34.515 [2024-05-15 18:23:26.967666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.515 [2024-05-15 18:23:26.967732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:34.515 [2024-05-15 18:23:26.973143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.515 [2024-05-15 18:23:26.973203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:34.515 [2024-05-15 18:23:26.973225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.375 ms 00:31:34.515 [2024-05-15 18:23:26.973241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.515 [2024-05-15 18:23:26.973598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.515 [2024-05-15 18:23:26.973635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:34.515 [2024-05-15 18:23:26.973655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:31:34.515 [2024-05-15 18:23:26.973671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.515 [2024-05-15 18:23:26.973717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.515 [2024-05-15 18:23:26.973737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:34.515 [2024-05-15 18:23:26.973762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:34.515 [2024-05-15 18:23:26.973778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
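The copy above finished at an average of 25 MBps, i.e. 1024 MB in roughly 41 s (18:22:46 to 18:23:26), after which the 'FTL fast shutdown' management pipeline begins: each step is logged by trace_step as an Action (or Rollback) with a name, a duration in ms, and a status. A rough way to pull the slowest steps out of a captured run, assuming one trace entry per line as ftl_mngt.c emits them (run.log is a hypothetical capture of this output, not a file the test writes):

  # Pair each "name:" entry with the "duration:" entry that follows it,
  # then list the five slowest management steps.
  awk '/trace_step.*name:/     { sub(/.*name: /, ""); name = $0 }
       /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                 printf "%10.3f ms  %s\n", $0, name }' run.log |
    sort -rn | head -5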
00:31:34.515 [2024-05-15 18:23:26.973851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.515 [2024-05-15 18:23:26.973873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:34.515 [2024-05-15 18:23:26.973889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:34.515 [2024-05-15 18:23:26.973904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.515 [2024-05-15 18:23:26.973942] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:34.515 [2024-05-15 18:23:26.973965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.973985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 
state: free 00:31:34.515 [2024-05-15 18:23:26.974337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 
0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:34.515 [2024-05-15 18:23:26.974962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.974978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.974995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975591] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:34.516 [2024-05-15 18:23:26.975694] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:34.516 [2024-05-15 18:23:26.975710] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 17e6d8e7-7482-43f5-9327-0822f86d5edf 00:31:34.516 [2024-05-15 18:23:26.975727] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:34.516 [2024-05-15 18:23:26.975749] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:34.516 [2024-05-15 18:23:26.975764] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:34.516 [2024-05-15 18:23:26.975780] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:34.516 [2024-05-15 18:23:26.975794] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:34.516 [2024-05-15 18:23:26.975810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:34.516 [2024-05-15 18:23:26.975825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:34.516 [2024-05-15 18:23:26.975840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:34.516 [2024-05-15 18:23:26.975854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:34.516 [2024-05-15 18:23:26.975869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.516 [2024-05-15 18:23:26.975885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:34.516 [2024-05-15 18:23:26.975901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.929 ms 00:31:34.516 [2024-05-15 18:23:26.975916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.516 [2024-05-15 18:23:26.994406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.516 [2024-05-15 18:23:26.994499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:34.516 [2024-05-15 18:23:26.994518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.460 ms 00:31:34.516 [2024-05-15 18:23:26.994530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.516 [2024-05-15 18:23:26.994788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:34.516 [2024-05-15 18:23:26.994818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:34.516 [2024-05-15 18:23:26.994833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:31:34.516 [2024-05-15 18:23:26.994845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.775 [2024-05-15 18:23:27.045514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:34.775 [2024-05-15 18:23:27.045656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:34.775 [2024-05-15 18:23:27.045692] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:34.775 [2024-05-15 18:23:27.045705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.775 [2024-05-15 18:23:27.045788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:34.775 [2024-05-15 18:23:27.045805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:34.775 [2024-05-15 18:23:27.045825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:34.775 [2024-05-15 18:23:27.045837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.775 [2024-05-15 18:23:27.045923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:34.775 [2024-05-15 18:23:27.045943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:34.775 [2024-05-15 18:23:27.045971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:34.775 [2024-05-15 18:23:27.045983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.775 [2024-05-15 18:23:27.046007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:34.776 [2024-05-15 18:23:27.046021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:34.776 [2024-05-15 18:23:27.046034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:34.776 [2024-05-15 18:23:27.046045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.776 [2024-05-15 18:23:27.162905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:34.776 [2024-05-15 18:23:27.162980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:34.776 [2024-05-15 18:23:27.163000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:34.776 [2024-05-15 18:23:27.163013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.776 [2024-05-15 18:23:27.205265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:34.776 [2024-05-15 18:23:27.205324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:34.776 [2024-05-15 18:23:27.205343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:34.776 [2024-05-15 18:23:27.205356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.776 [2024-05-15 18:23:27.205443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:34.776 [2024-05-15 18:23:27.205469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:34.776 [2024-05-15 18:23:27.205482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:34.776 [2024-05-15 18:23:27.205494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.776 [2024-05-15 18:23:27.205546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:34.776 [2024-05-15 18:23:27.205562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:34.776 [2024-05-15 18:23:27.205575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:34.776 [2024-05-15 18:23:27.205586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.776 [2024-05-15 18:23:27.205693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:34.776 [2024-05-15 18:23:27.205719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:31:34.776 [2024-05-15 18:23:27.205733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:34.776 [2024-05-15 18:23:27.205744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.776 [2024-05-15 18:23:27.205784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:34.776 [2024-05-15 18:23:27.205807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:34.776 [2024-05-15 18:23:27.205821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:34.776 [2024-05-15 18:23:27.205833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.776 [2024-05-15 18:23:27.205877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:34.776 [2024-05-15 18:23:27.205893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:34.776 [2024-05-15 18:23:27.205912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:34.776 [2024-05-15 18:23:27.205924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.776 [2024-05-15 18:23:27.205976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:34.776 [2024-05-15 18:23:27.205993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:34.776 [2024-05-15 18:23:27.206006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:34.776 [2024-05-15 18:23:27.206017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:34.776 [2024-05-15 18:23:27.206160] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 239.286 ms, result 0 00:31:36.189 00:31:36.189 00:31:36.189 18:23:28 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:38.720 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:38.720 18:23:30 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:38.720 [2024-05-15 18:23:30.875415] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
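Before writing anything back, "restore.sh@76" checks the read-back file against a stored checksum (md5sum -c reports OK), and "restore.sh@79" then runs spdk_dd in file-to-bdev mode, copying testfile into ftl0 with --seek=131072. A sketch of that verify-then-write-back pair; if --seek counts I/O units of the same inferred 4 KiB block size (as dd's seek does), the write starts 512 MiB into the device:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Verify the earlier read-back, then push the file back into the bdev
  # at block offset 131072 (131072 * 4 KiB = 512 MiB, inferred).
  md5sum -c "$SPDK/test/ftl/testfile.md5"
  "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile" \
    --ob=ftl0 \
    --json="$SPDK/test/ftl/config/ftl.json" \
    --seek=131072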
00:31:38.720 [2024-05-15 18:23:30.875587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85938 ] 00:31:38.720 [2024-05-15 18:23:31.051791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.978 [2024-05-15 18:23:31.341853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.237 [2024-05-15 18:23:31.687770] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:39.237 [2024-05-15 18:23:31.687855] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:39.497 [2024-05-15 18:23:31.844113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.497 [2024-05-15 18:23:31.844196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:39.497 [2024-05-15 18:23:31.844219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:39.497 [2024-05-15 18:23:31.844238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.497 [2024-05-15 18:23:31.844331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.497 [2024-05-15 18:23:31.844353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:39.497 [2024-05-15 18:23:31.844367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:39.497 [2024-05-15 18:23:31.844379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.497 [2024-05-15 18:23:31.844423] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:39.497 [2024-05-15 18:23:31.845330] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:39.497 [2024-05-15 18:23:31.845371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.497 [2024-05-15 18:23:31.845386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:39.497 [2024-05-15 18:23:31.845400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:31:39.497 [2024-05-15 18:23:31.845412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.497 [2024-05-15 18:23:31.845873] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:39.497 [2024-05-15 18:23:31.845915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.497 [2024-05-15 18:23:31.845930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:39.497 [2024-05-15 18:23:31.845944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:31:39.497 [2024-05-15 18:23:31.845956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.497 [2024-05-15 18:23:31.846013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.497 [2024-05-15 18:23:31.846031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:39.497 [2024-05-15 18:23:31.846048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:39.497 [2024-05-15 18:23:31.846060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.497 [2024-05-15 18:23:31.846497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.497 [2024-05-15 
18:23:31.846526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:39.497 [2024-05-15 18:23:31.846541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:31:39.497 [2024-05-15 18:23:31.846553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.497 [2024-05-15 18:23:31.846637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.497 [2024-05-15 18:23:31.846667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:39.497 [2024-05-15 18:23:31.846684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:31:39.498 [2024-05-15 18:23:31.846696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.498 [2024-05-15 18:23:31.846729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.498 [2024-05-15 18:23:31.846745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:39.498 [2024-05-15 18:23:31.846757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:39.498 [2024-05-15 18:23:31.846768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.498 [2024-05-15 18:23:31.846798] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:39.498 [2024-05-15 18:23:31.851988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.498 [2024-05-15 18:23:31.852035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:39.498 [2024-05-15 18:23:31.852052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.196 ms 00:31:39.498 [2024-05-15 18:23:31.852064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.498 [2024-05-15 18:23:31.852101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.498 [2024-05-15 18:23:31.852125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:39.498 [2024-05-15 18:23:31.852137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:39.498 [2024-05-15 18:23:31.852148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.498 [2024-05-15 18:23:31.852219] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:39.498 [2024-05-15 18:23:31.852252] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:31:39.498 [2024-05-15 18:23:31.852303] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:39.498 [2024-05-15 18:23:31.852327] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:31:39.498 [2024-05-15 18:23:31.852411] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:31:39.498 [2024-05-15 18:23:31.852428] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:39.498 [2024-05-15 18:23:31.852443] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:31:39.498 [2024-05-15 18:23:31.852458] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:39.498 [2024-05-15 18:23:31.852472] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:39.498 [2024-05-15 18:23:31.852484] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:39.498 [2024-05-15 18:23:31.852495] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:39.498 [2024-05-15 18:23:31.852507] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:31:39.498 [2024-05-15 18:23:31.852518] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:31:39.498 [2024-05-15 18:23:31.852530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.498 [2024-05-15 18:23:31.852542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:39.498 [2024-05-15 18:23:31.852558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:31:39.498 [2024-05-15 18:23:31.852570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.498 [2024-05-15 18:23:31.852646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.498 [2024-05-15 18:23:31.852663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:39.498 [2024-05-15 18:23:31.852675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:39.498 [2024-05-15 18:23:31.852687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.498 [2024-05-15 18:23:31.852773] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:39.498 [2024-05-15 18:23:31.852790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:39.498 [2024-05-15 18:23:31.852802] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:39.498 [2024-05-15 18:23:31.852818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:39.498 [2024-05-15 18:23:31.852830] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:39.498 [2024-05-15 18:23:31.852841] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:39.498 [2024-05-15 18:23:31.852852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:39.498 [2024-05-15 18:23:31.852862] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:39.498 [2024-05-15 18:23:31.852873] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:39.498 [2024-05-15 18:23:31.852884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:39.498 [2024-05-15 18:23:31.852895] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:39.498 [2024-05-15 18:23:31.852905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:39.498 [2024-05-15 18:23:31.852916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:39.498 [2024-05-15 18:23:31.852927] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:39.498 [2024-05-15 18:23:31.852940] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:31:39.498 [2024-05-15 18:23:31.852951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:39.498 [2024-05-15 18:23:31.852962] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:39.498 [2024-05-15 18:23:31.852986] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:31:39.498 [2024-05-15 18:23:31.852997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:31:39.498 [2024-05-15 18:23:31.853008] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:31:39.498 [2024-05-15 18:23:31.853019] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:31:39.498 [2024-05-15 18:23:31.853029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:31:39.498 [2024-05-15 18:23:31.853040] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:39.498 [2024-05-15 18:23:31.853051] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:39.498 [2024-05-15 18:23:31.853062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:31:39.498 [2024-05-15 18:23:31.853072] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:39.498 [2024-05-15 18:23:31.853083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:31:39.498 [2024-05-15 18:23:31.853094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:31:39.498 [2024-05-15 18:23:31.853104] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:39.498 [2024-05-15 18:23:31.853115] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:39.498 [2024-05-15 18:23:31.853126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:31:39.498 [2024-05-15 18:23:31.853136] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:39.498 [2024-05-15 18:23:31.853147] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:31:39.498 [2024-05-15 18:23:31.853158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:31:39.498 [2024-05-15 18:23:31.853168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:39.498 [2024-05-15 18:23:31.853179] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:39.498 [2024-05-15 18:23:31.853190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:39.498 [2024-05-15 18:23:31.853200] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:39.498 [2024-05-15 18:23:31.853217] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:31:39.498 [2024-05-15 18:23:31.853228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:39.498 [2024-05-15 18:23:31.853238] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:39.498 [2024-05-15 18:23:31.853250] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:39.498 [2024-05-15 18:23:31.853261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:39.498 [2024-05-15 18:23:31.853273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:39.498 [2024-05-15 18:23:31.853285] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:39.498 [2024-05-15 18:23:31.853312] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:39.498 [2024-05-15 18:23:31.853326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:39.498 [2024-05-15 18:23:31.853337] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:39.498 [2024-05-15 18:23:31.853349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:39.498 [2024-05-15 18:23:31.853360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:39.498 [2024-05-15 18:23:31.853372] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:39.498 [2024-05-15 18:23:31.853393] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:39.498 [2024-05-15 18:23:31.853406] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:39.498 [2024-05-15 18:23:31.853418] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:31:39.498 [2024-05-15 18:23:31.853430] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:31:39.498 [2024-05-15 18:23:31.853442] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:31:39.498 [2024-05-15 18:23:31.853454] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:31:39.498 [2024-05-15 18:23:31.853466] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:31:39.498 [2024-05-15 18:23:31.853478] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:31:39.498 [2024-05-15 18:23:31.853490] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:31:39.498 [2024-05-15 18:23:31.853502] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:31:39.498 [2024-05-15 18:23:31.853514] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:31:39.498 [2024-05-15 18:23:31.853525] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:31:39.498 [2024-05-15 18:23:31.853537] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:31:39.498 [2024-05-15 18:23:31.853549] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:31:39.498 [2024-05-15 18:23:31.853561] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:39.498 [2024-05-15 18:23:31.853581] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:39.498 [2024-05-15 18:23:31.853595] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:39.499 [2024-05-15 18:23:31.853607] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:39.499 [2024-05-15 18:23:31.853619] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:39.499 [2024-05-15 18:23:31.853632] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:31:39.499 [2024-05-15 18:23:31.853645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.853656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:39.499 [2024-05-15 18:23:31.853672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:31:39.499 [2024-05-15 18:23:31.853684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.871983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.872048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:39.499 [2024-05-15 18:23:31.872081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.237 ms 00:31:39.499 [2024-05-15 18:23:31.872094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.872209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.872226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:39.499 [2024-05-15 18:23:31.872239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:31:39.499 [2024-05-15 18:23:31.872250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.934456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.934529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:39.499 [2024-05-15 18:23:31.934554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.111 ms 00:31:39.499 [2024-05-15 18:23:31.934569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.934655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.934677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:39.499 [2024-05-15 18:23:31.934693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:39.499 [2024-05-15 18:23:31.934707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.934897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.934938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:39.499 [2024-05-15 18:23:31.934956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:31:39.499 [2024-05-15 18:23:31.934971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.935127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.935156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:39.499 [2024-05-15 18:23:31.935172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:31:39.499 [2024-05-15 18:23:31.935187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.958793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.958872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:39.499 [2024-05-15 18:23:31.958896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.567 ms 00:31:39.499 [2024-05-15 
18:23:31.958912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.959171] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:39.499 [2024-05-15 18:23:31.959212] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:39.499 [2024-05-15 18:23:31.959232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.959248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:39.499 [2024-05-15 18:23:31.959275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:31:39.499 [2024-05-15 18:23:31.959349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.976375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.976459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:39.499 [2024-05-15 18:23:31.976483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.969 ms 00:31:39.499 [2024-05-15 18:23:31.976511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.976724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.976745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:39.499 [2024-05-15 18:23:31.976761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:31:39.499 [2024-05-15 18:23:31.976776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.976873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.976895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:39.499 [2024-05-15 18:23:31.976911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:39.499 [2024-05-15 18:23:31.976926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.977536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.977570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:39.499 [2024-05-15 18:23:31.977607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:31:39.499 [2024-05-15 18:23:31.977622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.977659] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:39.499 [2024-05-15 18:23:31.977680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.977694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:39.499 [2024-05-15 18:23:31.977709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:31:39.499 [2024-05-15 18:23:31.977723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.499 [2024-05-15 18:23:31.996455] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:39.499 [2024-05-15 18:23:31.996822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.499 [2024-05-15 18:23:31.996864] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:39.499 [2024-05-15 18:23:31.996886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.067 ms 00:31:39.499 [2024-05-15 18:23:31.996902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.758 [2024-05-15 18:23:31.999830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.758 [2024-05-15 18:23:31.999869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:39.758 [2024-05-15 18:23:31.999887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:31:39.758 [2024-05-15 18:23:31.999901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.758 [2024-05-15 18:23:32.000068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.758 [2024-05-15 18:23:32.000110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:39.758 [2024-05-15 18:23:32.000128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:31:39.758 [2024-05-15 18:23:32.000143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.758 [2024-05-15 18:23:32.001815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.758 [2024-05-15 18:23:32.001860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:31:39.758 [2024-05-15 18:23:32.001878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:31:39.758 [2024-05-15 18:23:32.001893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.758 [2024-05-15 18:23:32.001935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.758 [2024-05-15 18:23:32.001960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:39.758 [2024-05-15 18:23:32.001980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:39.758 [2024-05-15 18:23:32.001994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.758 [2024-05-15 18:23:32.002063] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:39.758 [2024-05-15 18:23:32.002086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.758 [2024-05-15 18:23:32.002100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:39.758 [2024-05-15 18:23:32.002116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:39.758 [2024-05-15 18:23:32.002130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.758 [2024-05-15 18:23:32.041616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.758 [2024-05-15 18:23:32.041681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:39.758 [2024-05-15 18:23:32.041704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.448 ms 00:31:39.758 [2024-05-15 18:23:32.041719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:39.758 [2024-05-15 18:23:32.041819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:39.758 [2024-05-15 18:23:32.041842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:39.758 [2024-05-15 18:23:32.041858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:31:39.758 [2024-05-15 18:23:32.041878] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.975 [2024-05-15 18:23:32.043418] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 198.688 ms, result 0 00:32:24.974  Copying: 1024/1024 [MB] (average 22 MBps)[2024-05-15 18:24:17.303015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.974 [2024-05-15 18:24:17.303097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:24.974 [2024-05-15 18:24:17.303127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:24.974 [2024-05-15 18:24:17.303140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.974 [2024-05-15 18:24:17.304545] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:24.974 [2024-05-15 18:24:17.310645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.974 [2024-05-15 18:24:17.310687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:24.974 [2024-05-15 18:24:17.310704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.054 ms 00:32:24.974 [2024-05-15 18:24:17.310716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.974 [2024-05-15 18:24:17.321828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.974 [2024-05-15 18:24:17.321881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:24.974 [2024-05-15 18:24:17.321899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.637 ms 00:32:24.974 [2024-05-15 18:24:17.321911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.974 [2024-05-15 18:24:17.321948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.974 [2024-05-15 18:24:17.321963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:24.974 [2024-05-15 18:24:17.321976]
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:24.974 [2024-05-15 18:24:17.321987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.974 [2024-05-15 18:24:17.322051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.974 [2024-05-15 18:24:17.322067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:24.974 [2024-05-15 18:24:17.322084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:24.974 [2024-05-15 18:24:17.322096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.974 [2024-05-15 18:24:17.322115] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:24.974 [2024-05-15 18:24:17.322133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130816 / 261120 wr_cnt: 1 state: open 00:32:24.974 [2024-05-15 18:24:17.322148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:24.974 [2024-05-15 18:24:17.322470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322688] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.322990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 
18:24:17.323038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:32:24.975 [2024-05-15 18:24:17.323370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:24.975 [2024-05-15 18:24:17.323452] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:24.975 [2024-05-15 18:24:17.323464] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 17e6d8e7-7482-43f5-9327-0822f86d5edf 00:32:24.975 [2024-05-15 18:24:17.323476] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130816 00:32:24.975 [2024-05-15 18:24:17.323487] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130848 00:32:24.975 [2024-05-15 18:24:17.323499] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130816 00:32:24.975 [2024-05-15 18:24:17.323511] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:32:24.975 [2024-05-15 18:24:17.323522] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:24.975 [2024-05-15 18:24:17.323534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:24.975 [2024-05-15 18:24:17.323550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:24.975 [2024-05-15 18:24:17.323561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:24.975 [2024-05-15 18:24:17.323571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:24.975 [2024-05-15 18:24:17.323582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.975 [2024-05-15 18:24:17.323594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:24.975 [2024-05-15 18:24:17.323606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:32:24.975 [2024-05-15 18:24:17.323618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.975 [2024-05-15 18:24:17.340669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.975 [2024-05-15 18:24:17.340720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:24.975 [2024-05-15 18:24:17.340740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.027 ms 00:32:24.975 [2024-05-15 18:24:17.340759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.975 [2024-05-15 18:24:17.341059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.975 [2024-05-15 18:24:17.341087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:24.975 [2024-05-15 18:24:17.341102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:32:24.975 [2024-05-15 18:24:17.341114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.975 [2024-05-15 18:24:17.391804] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.975 [2024-05-15 18:24:17.391880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:24.975 [2024-05-15 18:24:17.391909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.976 [2024-05-15 18:24:17.391921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.976 [2024-05-15 18:24:17.392022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.976 [2024-05-15 18:24:17.392054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:24.976 [2024-05-15 18:24:17.392068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.976 [2024-05-15 18:24:17.392080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.976 [2024-05-15 18:24:17.392163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.976 [2024-05-15 18:24:17.392183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:24.976 [2024-05-15 18:24:17.392196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.976 [2024-05-15 18:24:17.392215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.976 [2024-05-15 18:24:17.392248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.976 [2024-05-15 18:24:17.392268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:24.976 [2024-05-15 18:24:17.392280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.976 [2024-05-15 18:24:17.392292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.234 [2024-05-15 18:24:17.501874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.235 [2024-05-15 18:24:17.501982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:25.235 [2024-05-15 18:24:17.502011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.235 [2024-05-15 18:24:17.502025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.235 [2024-05-15 18:24:17.546864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.235 [2024-05-15 18:24:17.546936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:25.235 [2024-05-15 18:24:17.546968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.235 [2024-05-15 18:24:17.546981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.235 [2024-05-15 18:24:17.547084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.235 [2024-05-15 18:24:17.547103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:25.235 [2024-05-15 18:24:17.547116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.235 [2024-05-15 18:24:17.547128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.235 [2024-05-15 18:24:17.547188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.235 [2024-05-15 18:24:17.547230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:25.235 [2024-05-15 18:24:17.547247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.235 [2024-05-15 18:24:17.547264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:32:25.235 [2024-05-15 18:24:17.547417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.235 [2024-05-15 18:24:17.547440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:25.235 [2024-05-15 18:24:17.547453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.235 [2024-05-15 18:24:17.547469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.235 [2024-05-15 18:24:17.547547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.235 [2024-05-15 18:24:17.547569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:25.235 [2024-05-15 18:24:17.547582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.235 [2024-05-15 18:24:17.547594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.235 [2024-05-15 18:24:17.547648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.235 [2024-05-15 18:24:17.547669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:25.235 [2024-05-15 18:24:17.547684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.235 [2024-05-15 18:24:17.547696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.235 [2024-05-15 18:24:17.547754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.235 [2024-05-15 18:24:17.547782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:25.235 [2024-05-15 18:24:17.547795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.235 [2024-05-15 18:24:17.547807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.235 [2024-05-15 18:24:17.547995] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 247.949 ms, result 0 00:32:27.158 00:32:27.159 00:32:27.159 18:24:19 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:27.159 [2024-05-15 18:24:19.482990] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
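restore.sh line 80, traced just above, is the mirror image of the earlier write: the 262144-block region written at offset 131072 is read back out of ftl0 into the test file, presumably so its checksum can be compared afterwards as was done at line 76. A sketch of the read-back under the same SPDK_DIR shorthand (again, the spdk_dd flags are the ones traced in the log):

  # shorthand for the repo path used throughout this run
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # read the region back from the FTL bdev; --skip and --count are in
  # logical blocks and mirror the --seek used on the write side
  "$SPDK_DIR/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK_DIR/test/ftl/testfile" \
      --json="$SPDK_DIR/test/ftl/config/ftl.json" --skip=131072 --count=262144
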
00:32:27.159 [2024-05-15 18:24:19.483209] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86418 ] 00:32:27.159 [2024-05-15 18:24:19.659145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.417 [2024-05-15 18:24:19.897298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.984 [2024-05-15 18:24:20.248794] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:27.984 [2024-05-15 18:24:20.248918] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:27.984 [2024-05-15 18:24:20.406346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.984 [2024-05-15 18:24:20.406427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:27.984 [2024-05-15 18:24:20.406465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:27.984 [2024-05-15 18:24:20.406483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.984 [2024-05-15 18:24:20.406557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.984 [2024-05-15 18:24:20.406578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:27.984 [2024-05-15 18:24:20.406595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:32:27.984 [2024-05-15 18:24:20.406617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.984 [2024-05-15 18:24:20.406648] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:27.984 [2024-05-15 18:24:20.407565] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:27.984 [2024-05-15 18:24:20.407644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.984 [2024-05-15 18:24:20.407658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:27.984 [2024-05-15 18:24:20.407681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:32:27.984 [2024-05-15 18:24:20.407693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.984 [2024-05-15 18:24:20.408237] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:27.984 [2024-05-15 18:24:20.408279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.984 [2024-05-15 18:24:20.408311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:27.984 [2024-05-15 18:24:20.408328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:27.984 [2024-05-15 18:24:20.408340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.984 [2024-05-15 18:24:20.408397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.984 [2024-05-15 18:24:20.408414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:27.984 [2024-05-15 18:24:20.408431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:27.984 [2024-05-15 18:24:20.408443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.984 [2024-05-15 18:24:20.408865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.984 [2024-05-15 
18:24:20.408894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:27.984 [2024-05-15 18:24:20.408908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:32:27.984 [2024-05-15 18:24:20.408920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.984 [2024-05-15 18:24:20.409002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.984 [2024-05-15 18:24:20.409030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:27.984 [2024-05-15 18:24:20.409048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:27.984 [2024-05-15 18:24:20.409060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.984 [2024-05-15 18:24:20.409091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.984 [2024-05-15 18:24:20.409106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:27.984 [2024-05-15 18:24:20.409119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:27.984 [2024-05-15 18:24:20.409130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.984 [2024-05-15 18:24:20.409158] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:27.984 [2024-05-15 18:24:20.414565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.984 [2024-05-15 18:24:20.414619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:27.984 [2024-05-15 18:24:20.414652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.413 ms 00:32:27.984 [2024-05-15 18:24:20.414663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.984 [2024-05-15 18:24:20.414700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.984 [2024-05-15 18:24:20.414722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:27.984 [2024-05-15 18:24:20.414735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:27.985 [2024-05-15 18:24:20.414754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.985 [2024-05-15 18:24:20.414815] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:27.985 [2024-05-15 18:24:20.414846] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:32:27.985 [2024-05-15 18:24:20.414885] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:27.985 [2024-05-15 18:24:20.414904] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:32:27.985 [2024-05-15 18:24:20.414986] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:32:27.985 [2024-05-15 18:24:20.415001] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:27.985 [2024-05-15 18:24:20.415015] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:32:27.985 [2024-05-15 18:24:20.415035] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:27.985 [2024-05-15 18:24:20.415053] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:27.985 [2024-05-15 18:24:20.415064] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:27.985 [2024-05-15 18:24:20.415075] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:27.985 [2024-05-15 18:24:20.415086] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:32:27.985 [2024-05-15 18:24:20.415097] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:32:27.985 [2024-05-15 18:24:20.415109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.985 [2024-05-15 18:24:20.415120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:27.985 [2024-05-15 18:24:20.415137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:32:27.985 [2024-05-15 18:24:20.415148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.985 [2024-05-15 18:24:20.415222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.985 [2024-05-15 18:24:20.415244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:27.985 [2024-05-15 18:24:20.415256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:32:27.985 [2024-05-15 18:24:20.415266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.985 [2024-05-15 18:24:20.415368] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:27.985 [2024-05-15 18:24:20.415392] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:27.985 [2024-05-15 18:24:20.415405] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:27.985 [2024-05-15 18:24:20.415428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:27.985 [2024-05-15 18:24:20.415439] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:27.985 [2024-05-15 18:24:20.415455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:27.985 [2024-05-15 18:24:20.415466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:27.985 [2024-05-15 18:24:20.415476] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:27.985 [2024-05-15 18:24:20.415487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:27.985 [2024-05-15 18:24:20.415497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:27.985 [2024-05-15 18:24:20.415508] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:27.985 [2024-05-15 18:24:20.415518] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:27.985 [2024-05-15 18:24:20.415528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:27.985 [2024-05-15 18:24:20.415539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:27.985 [2024-05-15 18:24:20.415549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:32:27.985 [2024-05-15 18:24:20.415559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:27.985 [2024-05-15 18:24:20.415574] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:27.985 [2024-05-15 18:24:20.415606] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:32:27.985 [2024-05-15 18:24:20.415618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:32:27.985 [2024-05-15 18:24:20.415629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:32:27.985 [2024-05-15 18:24:20.415640] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:32:27.985 [2024-05-15 18:24:20.415650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:32:27.985 [2024-05-15 18:24:20.415661] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:27.985 [2024-05-15 18:24:20.415676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:27.985 [2024-05-15 18:24:20.415687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:32:27.985 [2024-05-15 18:24:20.415698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:27.985 [2024-05-15 18:24:20.415708] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:32:27.985 [2024-05-15 18:24:20.415718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:32:27.985 [2024-05-15 18:24:20.415728] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:27.985 [2024-05-15 18:24:20.415739] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:27.985 [2024-05-15 18:24:20.415749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:32:27.985 [2024-05-15 18:24:20.415759] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:27.985 [2024-05-15 18:24:20.415769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:32:27.985 [2024-05-15 18:24:20.415779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:32:27.985 [2024-05-15 18:24:20.415790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:27.985 [2024-05-15 18:24:20.415800] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:27.985 [2024-05-15 18:24:20.415810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:27.985 [2024-05-15 18:24:20.415820] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:27.985 [2024-05-15 18:24:20.415830] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:32:27.985 [2024-05-15 18:24:20.415840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:27.985 [2024-05-15 18:24:20.415850] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:27.985 [2024-05-15 18:24:20.415861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:27.985 [2024-05-15 18:24:20.415872] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:27.985 [2024-05-15 18:24:20.415884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:27.985 [2024-05-15 18:24:20.415895] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:27.985 [2024-05-15 18:24:20.415905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:27.985 [2024-05-15 18:24:20.415916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:27.985 [2024-05-15 18:24:20.415927] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:27.985 [2024-05-15 18:24:20.415937] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:27.985 [2024-05-15 18:24:20.415947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:27.985 [2024-05-15 18:24:20.415961] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:27.985 [2024-05-15 18:24:20.415981] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:27.985 [2024-05-15 18:24:20.415993] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:27.985 [2024-05-15 18:24:20.416005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:32:27.985 [2024-05-15 18:24:20.416016] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:32:27.985 [2024-05-15 18:24:20.416039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:32:27.985 [2024-05-15 18:24:20.416052] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:32:27.985 [2024-05-15 18:24:20.416064] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:32:27.985 [2024-05-15 18:24:20.416075] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:32:27.985 [2024-05-15 18:24:20.416086] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:32:27.985 [2024-05-15 18:24:20.416098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:32:27.985 [2024-05-15 18:24:20.416109] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:32:27.985 [2024-05-15 18:24:20.416120] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:32:27.985 [2024-05-15 18:24:20.416131] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:32:27.985 [2024-05-15 18:24:20.416143] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:32:27.985 [2024-05-15 18:24:20.416154] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:27.985 [2024-05-15 18:24:20.416166] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:27.985 [2024-05-15 18:24:20.416178] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:27.985 [2024-05-15 18:24:20.416190] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:27.985 [2024-05-15 18:24:20.416201] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:27.985 [2024-05-15 18:24:20.416213] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:32:27.985 [2024-05-15 18:24:20.416226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.985 [2024-05-15 18:24:20.416237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:27.985 [2024-05-15 18:24:20.416253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:32:27.986 [2024-05-15 18:24:20.416265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.986 [2024-05-15 18:24:20.434647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.986 [2024-05-15 18:24:20.434723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:27.986 [2024-05-15 18:24:20.434767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.311 ms 00:32:27.986 [2024-05-15 18:24:20.434779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.986 [2024-05-15 18:24:20.434873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.986 [2024-05-15 18:24:20.434889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:27.986 [2024-05-15 18:24:20.434911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:32:27.986 [2024-05-15 18:24:20.434921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.243 [2024-05-15 18:24:20.486032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.243 [2024-05-15 18:24:20.486118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:28.243 [2024-05-15 18:24:20.486172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.029 ms 00:32:28.243 [2024-05-15 18:24:20.486185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.243 [2024-05-15 18:24:20.486263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.243 [2024-05-15 18:24:20.486280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:28.243 [2024-05-15 18:24:20.486293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:28.243 [2024-05-15 18:24:20.486304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.243 [2024-05-15 18:24:20.486511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.243 [2024-05-15 18:24:20.486536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:28.243 [2024-05-15 18:24:20.486550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:32:28.243 [2024-05-15 18:24:20.486562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.243 [2024-05-15 18:24:20.486697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.243 [2024-05-15 18:24:20.486727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:28.243 [2024-05-15 18:24:20.486742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:32:28.243 [2024-05-15 18:24:20.486754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.243 [2024-05-15 18:24:20.507204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.243 [2024-05-15 18:24:20.507333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:28.243 [2024-05-15 18:24:20.507362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.420 ms 00:32:28.243 [2024-05-15 
18:24:20.507375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.243 [2024-05-15 18:24:20.507603] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:28.243 [2024-05-15 18:24:20.507626] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:28.243 [2024-05-15 18:24:20.507639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.243 [2024-05-15 18:24:20.507676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:28.243 [2024-05-15 18:24:20.507689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:32:28.244 [2024-05-15 18:24:20.507716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.244 [2024-05-15 18:24:20.520350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.244 [2024-05-15 18:24:20.520428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:28.244 [2024-05-15 18:24:20.520461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.612 ms 00:32:28.244 [2024-05-15 18:24:20.520477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.244 [2024-05-15 18:24:20.520651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.244 [2024-05-15 18:24:20.520683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:28.244 [2024-05-15 18:24:20.520711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:32:28.244 [2024-05-15 18:24:20.520722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.244 [2024-05-15 18:24:20.520779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.244 [2024-05-15 18:24:20.520798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:28.244 [2024-05-15 18:24:20.520811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:28.244 [2024-05-15 18:24:20.520832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.244 [2024-05-15 18:24:20.521239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.244 [2024-05-15 18:24:20.521267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:28.244 [2024-05-15 18:24:20.521281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:32:28.244 [2024-05-15 18:24:20.521328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.244 [2024-05-15 18:24:20.521357] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:28.244 [2024-05-15 18:24:20.521375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.244 [2024-05-15 18:24:20.521387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:28.244 [2024-05-15 18:24:20.521399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:32:28.244 [2024-05-15 18:24:20.521410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.244 [2024-05-15 18:24:20.535890] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:28.244 [2024-05-15 18:24:20.536128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.244 [2024-05-15 18:24:20.536156] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:28.244 [2024-05-15 18:24:20.536178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.692 ms 00:32:28.244 [2024-05-15 18:24:20.536190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.244 [2024-05-15 18:24:20.538672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.244 [2024-05-15 18:24:20.538723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:28.244 [2024-05-15 18:24:20.538755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.455 ms 00:32:28.244 [2024-05-15 18:24:20.538766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.244 [2024-05-15 18:24:20.538873] mngt/ftl_mngt_band.c: 413:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:28.244 [2024-05-15 18:24:20.539196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.244 [2024-05-15 18:24:20.539234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:28.244 [2024-05-15 18:24:20.539249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:32:28.244 [2024-05-15 18:24:20.539260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.244 [2024-05-15 18:24:20.540750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.244 [2024-05-15 18:24:20.540802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:32:28.244 [2024-05-15 18:24:20.540828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms 00:32:28.244 [2024-05-15 18:24:20.540839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.244 [2024-05-15 18:24:20.540887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.244 [2024-05-15 18:24:20.540908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:28.244 [2024-05-15 18:24:20.540924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:28.244 [2024-05-15 18:24:20.540935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.244 [2024-05-15 18:24:20.540995] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:28.244 [2024-05-15 18:24:20.541013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.244 [2024-05-15 18:24:20.541034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:28.244 [2024-05-15 18:24:20.541045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:28.244 [2024-05-15 18:24:20.541057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.244 [2024-05-15 18:24:20.572249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.244 [2024-05-15 18:24:20.572316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:28.244 [2024-05-15 18:24:20.572336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.162 ms 00:32:28.244 [2024-05-15 18:24:20.572348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.244 [2024-05-15 18:24:20.572432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.244 [2024-05-15 18:24:20.572452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:28.244 [2024-05-15 
18:24:20.572464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:10.215 [2024-05-15 18:24:20.572475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.215 [2024-05-15 18:24:20.580766] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 171.908 ms, result 0 00:33:10.215  Copying: 1024/1024 [MB] (average 24 MBps)[2024-05-15 18:25:02.494758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.215 [2024-05-15 18:25:02.494889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:10.215 [2024-05-15 18:25:02.494916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:10.215 [2024-05-15 18:25:02.494943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.215 [2024-05-15 18:25:02.494983] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:10.215 [2024-05-15 18:25:02.499756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.215 [2024-05-15 18:25:02.499801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:10.215 [2024-05-15 18:25:02.499815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.743 ms 00:33:10.215 [2024-05-15 18:25:02.499826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.215 [2024-05-15 18:25:02.500172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.215 [2024-05-15 18:25:02.500207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:10.215 [2024-05-15 18:25:02.500222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:33:10.215 [2024-05-15 18:25:02.500233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.215 [2024-05-15 18:25:02.500279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.215 [2024-05-15 18:25:02.500307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:10.215 [2024-05-15 18:25:02.500322]
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:10.215 [2024-05-15 18:25:02.500334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.215 [2024-05-15 18:25:02.500410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.215 [2024-05-15 18:25:02.500425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:10.215 [2024-05-15 18:25:02.500438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:10.215 [2024-05-15 18:25:02.500465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.215 [2024-05-15 18:25:02.500485] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:10.215 [2024-05-15 18:25:02.500503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 134144 / 261120 wr_cnt: 1 state: open 00:33:10.215 [2024-05-15 18:25:02.500518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:10.215 [2024-05-15 18:25:02.500531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:10.215 [2024-05-15 18:25:02.500544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:10.215 [2024-05-15 18:25:02.500556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.500990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501050] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 
18:25:02.501426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:33:10.216 [2024-05-15 18:25:02.501742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:10.216 [2024-05-15 18:25:02.501777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:10.217 [2024-05-15 18:25:02.501789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:10.217 [2024-05-15 18:25:02.501816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:10.217 [2024-05-15 18:25:02.501847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:10.217 [2024-05-15 18:25:02.501868] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:10.217 [2024-05-15 18:25:02.501880] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 17e6d8e7-7482-43f5-9327-0822f86d5edf 00:33:10.217 [2024-05-15 18:25:02.501892] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 134144 00:33:10.217 [2024-05-15 18:25:02.501902] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3360 00:33:10.217 [2024-05-15 18:25:02.501913] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3328 00:33:10.217 [2024-05-15 18:25:02.501925] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:33:10.217 [2024-05-15 18:25:02.501936] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:10.217 [2024-05-15 18:25:02.501948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:10.217 [2024-05-15 18:25:02.501964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:10.217 [2024-05-15 18:25:02.501974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:10.217 [2024-05-15 18:25:02.501984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:10.217 [2024-05-15 18:25:02.501994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.217 [2024-05-15 18:25:02.502005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:10.217 [2024-05-15 18:25:02.502017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.511 ms 00:33:10.217 [2024-05-15 18:25:02.502028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.217 [2024-05-15 18:25:02.521571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.217 [2024-05-15 18:25:02.521617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:10.217 [2024-05-15 18:25:02.521634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.519 ms 00:33:10.217 [2024-05-15 18:25:02.521646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.217 [2024-05-15 18:25:02.522036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.217 [2024-05-15 18:25:02.522090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:10.217 [2024-05-15 18:25:02.522120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:33:10.217 [2024-05-15 18:25:02.522133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.217 [2024-05-15 18:25:02.574433] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:10.217 [2024-05-15 18:25:02.574499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:10.217 [2024-05-15 18:25:02.574524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:10.217 [2024-05-15 18:25:02.574536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.217 [2024-05-15 18:25:02.574618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:10.217 [2024-05-15 18:25:02.574634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:10.217 [2024-05-15 18:25:02.574646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:10.217 [2024-05-15 18:25:02.574658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.217 [2024-05-15 18:25:02.574744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:10.217 [2024-05-15 18:25:02.574764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:10.217 [2024-05-15 18:25:02.574777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:10.217 [2024-05-15 18:25:02.574795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.217 [2024-05-15 18:25:02.574818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:10.217 [2024-05-15 18:25:02.574832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:10.217 [2024-05-15 18:25:02.574844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:10.217 [2024-05-15 18:25:02.574856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.217 [2024-05-15 18:25:02.693250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:10.217 [2024-05-15 18:25:02.693339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:10.217 [2024-05-15 18:25:02.693367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:10.217 [2024-05-15 18:25:02.693380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.475 [2024-05-15 18:25:02.735621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:10.475 [2024-05-15 18:25:02.735697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:10.475 [2024-05-15 18:25:02.735717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:10.475 [2024-05-15 18:25:02.735729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.475 [2024-05-15 18:25:02.735830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:10.475 [2024-05-15 18:25:02.735849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:10.475 [2024-05-15 18:25:02.735862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:10.475 [2024-05-15 18:25:02.735873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.475 [2024-05-15 18:25:02.735933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:10.475 [2024-05-15 18:25:02.735949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:10.475 [2024-05-15 18:25:02.735961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:10.475 [2024-05-15 18:25:02.735973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:33:10.475 [2024-05-15 18:25:02.736094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:10.475 [2024-05-15 18:25:02.736114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:10.475 [2024-05-15 18:25:02.736126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:10.475 [2024-05-15 18:25:02.736137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.475 [2024-05-15 18:25:02.736177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:10.475 [2024-05-15 18:25:02.736202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:10.475 [2024-05-15 18:25:02.736215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:10.475 [2024-05-15 18:25:02.736226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.475 [2024-05-15 18:25:02.736278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:10.475 [2024-05-15 18:25:02.736314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:10.475 [2024-05-15 18:25:02.736330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:10.475 [2024-05-15 18:25:02.736341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.475 [2024-05-15 18:25:02.736411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:10.475 [2024-05-15 18:25:02.736429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:10.475 [2024-05-15 18:25:02.736441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:10.475 [2024-05-15 18:25:02.736452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.475 [2024-05-15 18:25:02.736619] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 241.829 ms, result 0 00:33:11.849 00:33:11.849 00:33:11.849 18:25:04 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:14.421 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:14.421 Process with pid 84847 is not found 00:33:14.421 Remove shared memory files 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 84847 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 84847 ']' 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 84847 00:33:14.421 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (84847) - No such process 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # echo 'Process with pid 84847 is not found' 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared 
memory files 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_17e6d8e7-7482-43f5-9327-0822f86d5edf_band_md /dev/hugepages/ftl_17e6d8e7-7482-43f5-9327-0822f86d5edf_l2p_l1 /dev/hugepages/ftl_17e6d8e7-7482-43f5-9327-0822f86d5edf_l2p_l2 /dev/hugepages/ftl_17e6d8e7-7482-43f5-9327-0822f86d5edf_l2p_l2_ctx /dev/hugepages/ftl_17e6d8e7-7482-43f5-9327-0822f86d5edf_nvc_md /dev/hugepages/ftl_17e6d8e7-7482-43f5-9327-0822f86d5edf_p2l_pool /dev/hugepages/ftl_17e6d8e7-7482-43f5-9327-0822f86d5edf_sb /dev/hugepages/ftl_17e6d8e7-7482-43f5-9327-0822f86d5edf_sb_shm /dev/hugepages/ftl_17e6d8e7-7482-43f5-9327-0822f86d5edf_trim_bitmap /dev/hugepages/ftl_17e6d8e7-7482-43f5-9327-0822f86d5edf_trim_md /dev/hugepages/ftl_17e6d8e7-7482-43f5-9327-0822f86d5edf_vmap 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:14.421 00:33:14.421 real 3m23.433s 00:33:14.421 user 3m8.517s 00:33:14.421 sys 0m16.980s 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:14.421 ************************************ 00:33:14.421 END TEST ftl_restore_fast 00:33:14.421 ************************************ 00:33:14.421 18:25:06 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:14.421 18:25:06 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:14.421 18:25:06 ftl -- ftl/ftl.sh@14 -- # killprocess 76919 00:33:14.421 18:25:06 ftl -- common/autotest_common.sh@946 -- # '[' -z 76919 ']' 00:33:14.421 18:25:06 ftl -- common/autotest_common.sh@950 -- # kill -0 76919 00:33:14.422 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (76919) - No such process 00:33:14.422 Process with pid 76919 is not found 00:33:14.422 18:25:06 ftl -- common/autotest_common.sh@973 -- # echo 'Process with pid 76919 is not found' 00:33:14.422 18:25:06 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:14.422 18:25:06 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86891 00:33:14.422 18:25:06 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:14.422 18:25:06 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86891 00:33:14.422 18:25:06 ftl -- common/autotest_common.sh@827 -- # '[' -z 86891 ']' 00:33:14.422 18:25:06 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.422 18:25:06 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:14.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.422 18:25:06 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.422 18:25:06 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:14.422 18:25:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:14.422 [2024-05-15 18:25:06.727139] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
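The FTL management traces above all follow the same fixed Action / name / duration / status cadence, so per-step timings can be recovered directly from a saved copy of this console output. A minimal post-processing sketch, assuming the exact "*NOTICE*: [FTL][ftl0]" record format captured in this run (the script is illustrative only, reads the saved console text on stdin, and is not part of the SPDK tree):

    import re
    import sys

    # Pair each trace_step "name:" record with the "duration:" record that
    # follows it, then list the slowest steps first. The [FTL][ftl0] prefix
    # is hard-coded to match this particular run.
    text = sys.stdin.read()
    pairs = re.findall(
        r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+?)\s+\d{2}:\d{2}:\d{2}\."
        r".*?trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms",
        text,
        flags=re.S,
    )
    for step, ms in sorted(pairs, key=lambda p: float(p[1]), reverse=True)[:10]:
        print(f"{float(ms):8.3f} ms  {step}")

Fed the console text above, the top of that list would be "Initialize NV cache" (51.029 ms) and "Set FTL dirty state" (31.162 ms).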
00:33:14.422 [2024-05-15 18:25:06.727327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86891 ] 00:33:14.682 [2024-05-15 18:25:06.903025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.940 [2024-05-15 18:25:07.213015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.877 18:25:08 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:15.877 18:25:08 ftl -- common/autotest_common.sh@860 -- # return 0 00:33:15.877 18:25:08 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:16.135 nvme0n1 00:33:16.135 18:25:08 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:16.135 18:25:08 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:16.135 18:25:08 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:16.393 18:25:08 ftl -- ftl/common.sh@28 -- # stores=169e95ce-24e0-48ef-a88c-41168f029f61 00:33:16.393 18:25:08 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:16.393 18:25:08 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 169e95ce-24e0-48ef-a88c-41168f029f61 00:33:16.651 18:25:09 ftl -- ftl/ftl.sh@23 -- # killprocess 86891 00:33:16.651 18:25:09 ftl -- common/autotest_common.sh@946 -- # '[' -z 86891 ']' 00:33:16.651 18:25:09 ftl -- common/autotest_common.sh@950 -- # kill -0 86891 00:33:16.651 18:25:09 ftl -- common/autotest_common.sh@951 -- # uname 00:33:16.651 18:25:09 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:16.651 18:25:09 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86891 00:33:16.651 killing process with pid 86891 00:33:16.651 18:25:09 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:16.651 18:25:09 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:16.651 18:25:09 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86891' 00:33:16.651 18:25:09 ftl -- common/autotest_common.sh@965 -- # kill 86891 00:33:16.651 18:25:09 ftl -- common/autotest_common.sh@970 -- # wait 86891 00:33:19.184 18:25:11 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:19.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:19.467 Waiting for block devices as requested 00:33:19.467 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:19.467 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:19.724 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:19.724 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:24.991 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:24.991 Remove shared memory files 00:33:24.991 18:25:17 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:24.991 18:25:17 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:24.991 18:25:17 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:24.991 18:25:17 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:24.991 18:25:17 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:24.991 18:25:17 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:24.991 18:25:17 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:24.991 ************************************ 00:33:24.991 
END TEST ftl 00:33:24.991 ************************************ 00:33:24.991 00:33:24.991 real 15m26.380s 00:33:24.991 user 18m3.794s 00:33:24.991 sys 1m52.000s 00:33:24.991 18:25:17 ftl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:24.991 18:25:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:24.991 18:25:17 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:24.991 18:25:17 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:24.991 18:25:17 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:33:24.991 18:25:17 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:24.991 18:25:17 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:33:24.991 18:25:17 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:24.991 18:25:17 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:24.991 18:25:17 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:24.991 18:25:17 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:33:24.991 18:25:17 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:33:24.991 18:25:17 -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:24.991 18:25:17 -- common/autotest_common.sh@10 -- # set +x 00:33:24.991 18:25:17 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:33:24.991 18:25:17 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:33:24.991 18:25:17 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:33:24.991 18:25:17 -- common/autotest_common.sh@10 -- # set +x 00:33:26.368 INFO: APP EXITING 00:33:26.368 INFO: killing all VMs 00:33:26.368 INFO: killing vhost app 00:33:26.368 INFO: EXIT DONE 00:33:26.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:27.195 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:27.195 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:27.195 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:27.195 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:27.828 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:28.086 Cleaning 00:33:28.086 Removing: /var/run/dpdk/spdk0/config 00:33:28.086 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:28.086 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:28.086 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:28.086 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:28.086 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:28.086 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:28.086 Removing: /var/run/dpdk/spdk0 00:33:28.086 Removing: /var/run/dpdk/spdk_pid61130 00:33:28.086 Removing: /var/run/dpdk/spdk_pid61357 00:33:28.086 Removing: /var/run/dpdk/spdk_pid61578 00:33:28.086 Removing: /var/run/dpdk/spdk_pid61682 00:33:28.086 Removing: /var/run/dpdk/spdk_pid61733 00:33:28.086 Removing: /var/run/dpdk/spdk_pid61861 00:33:28.086 Removing: /var/run/dpdk/spdk_pid61890 00:33:28.086 Removing: /var/run/dpdk/spdk_pid62065 00:33:28.086 Removing: /var/run/dpdk/spdk_pid62169 00:33:28.086 Removing: /var/run/dpdk/spdk_pid62262 00:33:28.086 Removing: /var/run/dpdk/spdk_pid62376 00:33:28.086 Removing: /var/run/dpdk/spdk_pid62476 00:33:28.086 Removing: /var/run/dpdk/spdk_pid62516 00:33:28.086 Removing: /var/run/dpdk/spdk_pid62558 00:33:28.086 Removing: /var/run/dpdk/spdk_pid62626 00:33:28.086 Removing: /var/run/dpdk/spdk_pid62721 00:33:28.086 Removing: /var/run/dpdk/spdk_pid63184 00:33:28.086 Removing: /var/run/dpdk/spdk_pid63259 00:33:28.086 Removing: /var/run/dpdk/spdk_pid63329 
00:33:28.086 Removing: /var/run/dpdk/spdk_pid63349 00:33:28.086 Removing: /var/run/dpdk/spdk_pid63493 00:33:28.086 Removing: /var/run/dpdk/spdk_pid63509 00:33:28.086 Removing: /var/run/dpdk/spdk_pid63660 00:33:28.086 Removing: /var/run/dpdk/spdk_pid63682 00:33:28.086 Removing: /var/run/dpdk/spdk_pid63750 00:33:28.086 Removing: /var/run/dpdk/spdk_pid63769 00:33:28.086 Removing: /var/run/dpdk/spdk_pid63833 00:33:28.086 Removing: /var/run/dpdk/spdk_pid63857 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64044 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64086 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64167 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64237 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64279 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64352 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64398 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64445 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64491 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64538 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64590 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64631 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64683 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64724 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64775 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64817 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64864 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64910 00:33:28.086 Removing: /var/run/dpdk/spdk_pid64957 00:33:28.086 Removing: /var/run/dpdk/spdk_pid65003 00:33:28.086 Removing: /var/run/dpdk/spdk_pid65050 00:33:28.086 Removing: /var/run/dpdk/spdk_pid65097 00:33:28.086 Removing: /var/run/dpdk/spdk_pid65146 00:33:28.086 Removing: /var/run/dpdk/spdk_pid65196 00:33:28.086 Removing: /var/run/dpdk/spdk_pid65242 00:33:28.086 Removing: /var/run/dpdk/spdk_pid65290 00:33:28.086 Removing: /var/run/dpdk/spdk_pid65372 00:33:28.086 Removing: /var/run/dpdk/spdk_pid65489 00:33:28.086 Removing: /var/run/dpdk/spdk_pid65656 00:33:28.086 Removing: /var/run/dpdk/spdk_pid65757 00:33:28.086 Removing: /var/run/dpdk/spdk_pid65799 00:33:28.086 Removing: /var/run/dpdk/spdk_pid66266 00:33:28.086 Removing: /var/run/dpdk/spdk_pid66364 00:33:28.086 Removing: /var/run/dpdk/spdk_pid66475 00:33:28.086 Removing: /var/run/dpdk/spdk_pid66539 00:33:28.086 Removing: /var/run/dpdk/spdk_pid66570 00:33:28.086 Removing: /var/run/dpdk/spdk_pid66646 00:33:28.086 Removing: /var/run/dpdk/spdk_pid67279 00:33:28.344 Removing: /var/run/dpdk/spdk_pid67321 00:33:28.344 Removing: /var/run/dpdk/spdk_pid67831 00:33:28.344 Removing: /var/run/dpdk/spdk_pid67935 00:33:28.344 Removing: /var/run/dpdk/spdk_pid68058 00:33:28.344 Removing: /var/run/dpdk/spdk_pid68111 00:33:28.344 Removing: /var/run/dpdk/spdk_pid68142 00:33:28.344 Removing: /var/run/dpdk/spdk_pid68172 00:33:28.344 Removing: /var/run/dpdk/spdk_pid70027 00:33:28.344 Removing: /var/run/dpdk/spdk_pid70174 00:33:28.344 Removing: /var/run/dpdk/spdk_pid70179 00:33:28.344 Removing: /var/run/dpdk/spdk_pid70191 00:33:28.344 Removing: /var/run/dpdk/spdk_pid70237 00:33:28.344 Removing: /var/run/dpdk/spdk_pid70241 00:33:28.344 Removing: /var/run/dpdk/spdk_pid70253 00:33:28.344 Removing: /var/run/dpdk/spdk_pid70298 00:33:28.344 Removing: /var/run/dpdk/spdk_pid70302 00:33:28.344 Removing: /var/run/dpdk/spdk_pid70314 00:33:28.344 Removing: /var/run/dpdk/spdk_pid70359 00:33:28.344 Removing: /var/run/dpdk/spdk_pid70363 00:33:28.344 Removing: /var/run/dpdk/spdk_pid70375 00:33:28.344 Removing: /var/run/dpdk/spdk_pid71728 00:33:28.344 Removing: /var/run/dpdk/spdk_pid71827 00:33:28.344 Removing: 
/var/run/dpdk/spdk_pid72732 00:33:28.344 Removing: /var/run/dpdk/spdk_pid73094 00:33:28.344 Removing: /var/run/dpdk/spdk_pid73232 00:33:28.344 Removing: /var/run/dpdk/spdk_pid73364 00:33:28.344 Removing: /var/run/dpdk/spdk_pid73496 00:33:28.344 Removing: /var/run/dpdk/spdk_pid73645 00:33:28.344 Removing: /var/run/dpdk/spdk_pid73725 00:33:28.344 Removing: /var/run/dpdk/spdk_pid73865 00:33:28.344 Removing: /var/run/dpdk/spdk_pid74146 00:33:28.344 Removing: /var/run/dpdk/spdk_pid74188 00:33:28.344 Removing: /var/run/dpdk/spdk_pid74665 00:33:28.344 Removing: /var/run/dpdk/spdk_pid74852 00:33:28.344 Removing: /var/run/dpdk/spdk_pid74957 00:33:28.344 Removing: /var/run/dpdk/spdk_pid75069 00:33:28.344 Removing: /var/run/dpdk/spdk_pid75134 00:33:28.344 Removing: /var/run/dpdk/spdk_pid75159 00:33:28.344 Removing: /var/run/dpdk/spdk_pid75446 00:33:28.344 Removing: /var/run/dpdk/spdk_pid75506 00:33:28.344 Removing: /var/run/dpdk/spdk_pid75590 00:33:28.344 Removing: /var/run/dpdk/spdk_pid75983 00:33:28.344 Removing: /var/run/dpdk/spdk_pid76129 00:33:28.344 Removing: /var/run/dpdk/spdk_pid76919 00:33:28.344 Removing: /var/run/dpdk/spdk_pid77054 00:33:28.344 Removing: /var/run/dpdk/spdk_pid77253 00:33:28.344 Removing: /var/run/dpdk/spdk_pid77360 00:33:28.344 Removing: /var/run/dpdk/spdk_pid77709 00:33:28.344 Removing: /var/run/dpdk/spdk_pid77979 00:33:28.344 Removing: /var/run/dpdk/spdk_pid78344 00:33:28.344 Removing: /var/run/dpdk/spdk_pid78543 00:33:28.344 Removing: /var/run/dpdk/spdk_pid78679 00:33:28.344 Removing: /var/run/dpdk/spdk_pid78748 00:33:28.344 Removing: /var/run/dpdk/spdk_pid78892 00:33:28.344 Removing: /var/run/dpdk/spdk_pid78928 00:33:28.344 Removing: /var/run/dpdk/spdk_pid78998 00:33:28.344 Removing: /var/run/dpdk/spdk_pid79204 00:33:28.344 Removing: /var/run/dpdk/spdk_pid79451 00:33:28.344 Removing: /var/run/dpdk/spdk_pid79887 00:33:28.344 Removing: /var/run/dpdk/spdk_pid80347 00:33:28.344 Removing: /var/run/dpdk/spdk_pid80767 00:33:28.344 Removing: /var/run/dpdk/spdk_pid81262 00:33:28.344 Removing: /var/run/dpdk/spdk_pid81404 00:33:28.344 Removing: /var/run/dpdk/spdk_pid81510 00:33:28.344 Removing: /var/run/dpdk/spdk_pid82187 00:33:28.344 Removing: /var/run/dpdk/spdk_pid82280 00:33:28.344 Removing: /var/run/dpdk/spdk_pid82740 00:33:28.344 Removing: /var/run/dpdk/spdk_pid83156 00:33:28.344 Removing: /var/run/dpdk/spdk_pid83699 00:33:28.344 Removing: /var/run/dpdk/spdk_pid83845 00:33:28.344 Removing: /var/run/dpdk/spdk_pid83898 00:33:28.344 Removing: /var/run/dpdk/spdk_pid83969 00:33:28.344 Removing: /var/run/dpdk/spdk_pid84036 00:33:28.344 Removing: /var/run/dpdk/spdk_pid84106 00:33:28.344 Removing: /var/run/dpdk/spdk_pid84341 00:33:28.344 Removing: /var/run/dpdk/spdk_pid84384 00:33:28.344 Removing: /var/run/dpdk/spdk_pid84451 00:33:28.344 Removing: /var/run/dpdk/spdk_pid84525 00:33:28.344 Removing: /var/run/dpdk/spdk_pid84564 00:33:28.344 Removing: /var/run/dpdk/spdk_pid84645 00:33:28.344 Removing: /var/run/dpdk/spdk_pid84847 00:33:28.344 Removing: /var/run/dpdk/spdk_pid85089 00:33:28.344 Removing: /var/run/dpdk/spdk_pid85499 00:33:28.344 Removing: /var/run/dpdk/spdk_pid85938 00:33:28.344 Removing: /var/run/dpdk/spdk_pid86418 00:33:28.344 Removing: /var/run/dpdk/spdk_pid86891 00:33:28.344 Clean 00:33:28.602 18:25:20 -- common/autotest_common.sh@1447 -- # return 0 00:33:28.602 18:25:20 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:33:28.602 18:25:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:28.602 18:25:20 -- common/autotest_common.sh@10 -- # set +x 
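The clear_lvols / killprocess sequence traced at 18:25:08-18:25:09 earlier in this test (ftl/common.sh@28-30) reduces to a short shell pattern. A minimal reconstruction from the xtrace lines, offered as a sketch only: the rpc.py subcommands and the jq filter are verbatim from the log, while the function wrapper and quoting are assumptions.

# Sketch reconstructed from the ftl/common.sh xtrace above; assumes an SPDK
# target is already running (pid 86891 in this run) and rpc.py can reach it.
clear_lvols() {
    # Ask the target for every known lvolstore UUID...
    stores=$(scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    # ...then delete each store so the next FTL test starts from a bare device.
    for lvs in $stores; do
        scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    done
}

The killprocess 86891 that follows in the trace then verifies the pid still exists (kill -0), checks its comm name, and signals and waits on it before setup.sh reset rebinds the NVMe devices.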
00:33:28.602 18:25:20 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:33:28.602 18:25:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:28.602 18:25:20 -- common/autotest_common.sh@10 -- # set +x 00:33:28.602 18:25:20 -- spdk/autotest.sh@383 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:28.602 18:25:20 -- spdk/autotest.sh@385 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:28.602 18:25:20 -- spdk/autotest.sh@385 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:28.602 18:25:20 -- spdk/autotest.sh@387 -- # hash lcov 00:33:28.602 18:25:20 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:28.602 18:25:20 -- spdk/autotest.sh@389 -- # hostname 00:33:28.602 18:25:20 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:28.860 geninfo: WARNING: invalid characters removed from testname! 00:33:55.397 18:25:47 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:00.704 18:25:52 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:03.264 18:25:55 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:05.794 18:25:58 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:09.132 18:26:01 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:11.661 18:26:04 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:15.017 18:26:07 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info 
OLD_STDOUT OLD_STDERR 00:34:15.017 18:26:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:15.017 18:26:07 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:15.017 18:26:07 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:15.017 18:26:07 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:15.017 18:26:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.017 18:26:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.017 18:26:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.017 18:26:07 -- paths/export.sh@5 -- $ export PATH 00:34:15.017 18:26:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.017 18:26:07 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:34:15.017 18:26:07 -- common/autobuild_common.sh@437 -- $ date +%s 00:34:15.017 18:26:07 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715797567.XXXXXX 00:34:15.017 18:26:07 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715797567.EIzij8 00:34:15.017 18:26:07 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:34:15.017 18:26:07 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:34:15.017 18:26:07 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:34:15.017 18:26:07 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:34:15.017 18:26:07 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:34:15.017 18:26:07 -- common/autobuild_common.sh@453 -- $ get_config_params 00:34:15.017 18:26:07 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:34:15.017 18:26:07 -- common/autotest_common.sh@10 -- $ set +x 00:34:15.017 18:26:07 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd 
--with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:34:15.017 18:26:07 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:34:15.017 18:26:07 -- pm/common@17 -- $ local monitor 00:34:15.017 18:26:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:15.017 18:26:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:15.017 18:26:07 -- pm/common@25 -- $ sleep 1 00:34:15.017 18:26:07 -- pm/common@21 -- $ date +%s 00:34:15.017 18:26:07 -- pm/common@21 -- $ date +%s 00:34:15.017 18:26:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715797567 00:34:15.017 18:26:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715797567 00:34:15.017 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715797567_collect-vmstat.pm.log 00:34:15.017 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715797567_collect-cpu-load.pm.log 00:34:15.953 18:26:08 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:34:15.953 18:26:08 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:34:15.953 18:26:08 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:34:15.953 18:26:08 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:15.953 18:26:08 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:34:15.953 18:26:08 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:15.953 18:26:08 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:15.953 18:26:08 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:15.953 18:26:08 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:15.953 18:26:08 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:15.953 18:26:08 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:15.953 18:26:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:15.953 18:26:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:15.953 18:26:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:15.953 18:26:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:15.953 18:26:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:34:15.953 18:26:08 -- pm/common@44 -- $ pid=88589 00:34:15.953 18:26:08 -- pm/common@50 -- $ kill -TERM 88589 00:34:15.953 18:26:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:15.953 18:26:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:34:15.953 18:26:08 -- pm/common@44 -- $ pid=88591 00:34:15.953 18:26:08 -- pm/common@50 -- $ kill -TERM 88591 00:34:15.953 + [[ -n 5131 ]] 00:34:15.953 + sudo kill 5131 00:34:15.962 [Pipeline] } 00:34:15.980 [Pipeline] // timeout 00:34:15.985 [Pipeline] } 00:34:16.001 [Pipeline] // stage 00:34:16.005 [Pipeline] } 00:34:16.022 [Pipeline] // catchError 00:34:16.030 [Pipeline] stage 00:34:16.032 [Pipeline] { (Stop VM) 00:34:16.047 [Pipeline] sh 00:34:16.325 + vagrant halt 00:34:20.512 ==> default: Halting domain... 
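Before the VM teardown above, the coverage post-processing traced between 18:25:47 and 18:26:04 runs a capture, merge, filter pipeline over the lcov data. Condensed into one commented sketch: every flag and filter pattern is copied from the log, the long --rc list is abbreviated to its first two switches, $repo / $out stand in for the /home/vagrant/spdk_repo paths, and the -t test name comes from hostname, as the autotest.sh@389 trace shows.

# Common switches shared by every lcov call in the log (abbreviated here).
rc='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
# 1. Capture test-time counters from the build tree.
lcov $rc --no-external -q -c -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"
# 2. Merge the pre-test baseline with the test capture.
lcov $rc --no-external -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# 3. Filter bundled and system code out of the report, one pattern per pass.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $rc --no-external -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done

The trailing rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR (split across two log lines above) then drops the intermediates once cov_total.info is complete.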
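The resource monitors stopped just before the halt (pm/common@40-50 at 18:26:08) follow a PID-file pattern. A hedged reconstruction: the directory, monitor names, and TERM signal are taken from the trace, but reading the pid out of the file with $(<...) is an assumption, since the log only shows the resulting kill -TERM 88589 and kill -TERM 88591.

# Sketch of the monitor shutdown recorded above; paths copied from the trace.
power_dir=/home/vagrant/spdk_repo/spdk/../output/power
for monitor in collect-cpu-load collect-vmstat; do
    pidfile="$power_dir/$monitor.pid"
    # Only signal samplers that actually left a pid file behind.
    if [[ -e "$pidfile" ]]; then
        kill -TERM "$(<"$pidfile")"   # let the sampler flush its .pm.log and exit
    fi
done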
00:34:27.086 [Pipeline] sh 00:34:27.364 + vagrant destroy -f 00:34:31.553 ==> default: Removing domain... 00:34:31.824 [Pipeline] sh 00:34:32.103 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:34:32.112 [Pipeline] } 00:34:32.130 [Pipeline] // stage 00:34:32.136 [Pipeline] } 00:34:32.152 [Pipeline] // dir 00:34:32.157 [Pipeline] } 00:34:32.172 [Pipeline] // wrap 00:34:32.177 [Pipeline] } 00:34:32.192 [Pipeline] // catchError 00:34:32.198 [Pipeline] stage 00:34:32.200 [Pipeline] { (Epilogue) 00:34:32.211 [Pipeline] sh 00:34:32.491 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:39.101 [Pipeline] catchError 00:34:39.103 [Pipeline] { 00:34:39.117 [Pipeline] sh 00:34:39.398 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:39.657 Artifacts sizes are good 00:34:39.666 [Pipeline] } 00:34:39.683 [Pipeline] // catchError 00:34:39.695 [Pipeline] archiveArtifacts 00:34:39.702 Archiving artifacts 00:34:39.842 [Pipeline] cleanWs 00:34:39.852 [WS-CLEANUP] Deleting project workspace... 00:34:39.852 [WS-CLEANUP] Deferred wipeout is used... 00:34:39.858 [WS-CLEANUP] done 00:34:39.860 [Pipeline] } 00:34:39.878 [Pipeline] // stage 00:34:39.883 [Pipeline] } 00:34:39.902 [Pipeline] // node 00:34:39.907 [Pipeline] End of Pipeline 00:34:39.933 Finished: SUCCESS